Article

An Overview of Lidar Imaging Systems for Autonomous Vehicles

by Santiago Royo 1,2,* and Maria Ballesta-Garcia 1

1 Centre for Sensor, Instrumentation and Systems Development, Universitat Politècnica de Catalunya (CD6-UPC), Rambla Sant Nebridi 10, E08222 Terrassa, Spain
2 Beamagine S.L., C/Bellesguard 16, E08755 Castellbisbal, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(19), 4093; https://doi.org/10.3390/app9194093
Submission received: 7 August 2019 / Revised: 12 September 2019 / Accepted: 23 September 2019 / Published: 30 September 2019
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)

Abstract:
Lidar imaging systems are one of the hottest topics in the optronics industry. The need to sense the surroundings of every autonomous vehicle has pushed forward a race dedicated to deciding the final solution to be implemented. However, the diversity of state-of-the-art approaches brings a large uncertainty about which solution will end up dominant. Furthermore, the performance data of each approach often arise from different manufacturers and developers, which usually have some interest in the dispute. Within this paper, we intend to overcome this situation by providing an introductory, neutral overview of the technology linked to lidar imaging systems for autonomous vehicles, and of its current state of development. We start with the main single-point measurement principles utilized, which are then combined with different imaging strategies, also described in the paper. An overview of the features of the light sources and photodetectors most frequently used in practice in lidar imaging systems is also presented. Finally, a brief section on pending issues for lidar development in autonomous vehicles has been included, in order to present some of the problems which still need to be solved before implementation may be considered as final. The reader is provided with a detailed bibliography containing both relevant books and state-of-the-art papers for further progress in the subject.

1. Introduction

In recent years, lidar (an acronym of light detection and ranging) has progressed from a useful measurement technique suitable for studies of atmospheric aerosols and aerial mapping towards a kind of new Holy Grail in optomechanical engineering and optoelectronics. World-class engineering teams are launching start-ups and receiving relevant investments, and companies previously established in the field are being acquired by large industrial corporations, mainly from the automotive industry, or receiving heavy investments from the venture capital sector. The fuel of all this activity is the lack of a solution adequate in all aspects for lidar imaging systems in automobiles, whether because of performance, lack of components, industrialization, or cost issues. This has resulted in one of the strongest market-pull cases of recent years in the optics and photonics industries [1], a story which has even appeared in business journals such as Forbes [2], where market sizes at the billion-unit level are claimed for the future Level 5 (fully automated) self-driving car.
Lidar has been a well-known measurement technique since the last century, with established publications and a dense bibliography corpus. Lidar stands on a simple working principle based on measuring the time between events in magnitudes carried by light, such as, e.g., the backscattered energy from a pulsed beam. From these time measurements, the speed of light in air is used to compute distances or to perform mapping. Quite logically, this is referred to as the time-of-flight (TOF) principle. Remote sensing has been one of the paramount applications of the technology, either from ground-based stations (e.g., for aerosol monitoring) or as aerial or space-borne instrumentation, typically for Earth observation applications in different wavebands. A number of review papers [3], introductory texts [4] and comprehensive books [5,6] have been made available on the topic across the years. Lidar became so relevant that a full industry related to remote sensing and mapping developed around it. The relevance of the lidar industry is further shown by the existence of standards, including a dedicated data format and file extension for lidar mapping (.LAS, from laser), which has become an official standard for 3D point cloud data exchange beyond the sensor and the software which generates the point clouds. Despite its interest and the proximity of the topic, it is not the goal of this paper to deal with remote sensing or mapping lidars, even if they provide images. They are already well-established, blooming research fields on their own, with long track records, optimized experimental methods and dedicated data processing algorithms, published in specialized journals [7] and conferences [8].
However, a very relevant number of applications on the ground benefit from the capability of a sensor to capture the complete 3D information around a vehicle. Thus, still using time-counting of different events in light, first research-based and then commercial 3D cameras started to become available, and rapidly found applications in audiovisual segmentation [9], RGB+depth fusion [10], 3D screens [11], people and object detection [12] and, of course, human-machine interfaces [13], some of them reaching the general consumer market, the best-known example being the Microsoft Kinect [14]. These commercial units were in almost all cases amplitude-modulated cameras, where the phase of the emitted light was compared with that of the received light in order to estimate distance. A niche was covered, as these solutions were optimal for indoor applications without a strong solar background; although the spatial resolution of the images was limited and the depth resolution stayed at the cm level, they were still useful for applications involving large-scale objects (like humans).
Meanwhile, a revolution was incubating in the mobility industry in the shape of the self-driving car, which is expected to completely disrupt our mobility patterns. A fully autonomous car, with the driver traveling on the highway with neither steering wheel nor pedals, and without the need to monitor the vehicle at all, has become a technically feasible dream. Its social advantages are expected to be huge, centered on the removal of human error from driving, with an expected 90% decrease in fatalities, and with relevant improvements related to the reduction of traffic jams and fuel emissions, while enabling access to mobility for the aging and disabled populations. New ownership models are expected to appear, and several industries (from automotive repair to parking, not forgetting airlines and several others) are expected to be disrupted, with novel business models arising from a social change comparable only to the introduction of the mobile phone. Obviously, autonomous cars will come first due to the market size they represent, but other autonomous vehicles on the ground, in the air and at sea will progressively become partially or completely unmanned, from trains to vessels. This change of paradigm, however, needs reliable sensor suites able to completely monitor the environment of the vehicle, with systems based on different working principles and with different failure modes to anticipate all possible situations. A combination of radar, video cameras and lidar, combined with deep learning procedures, is the most likely solution for a vast majority of cases [15]. Lidar, thus, will be at the core of this revolution.
However, the rush towards the autonomous car and robotic vehicles has forced the requirements of lidar sensors in new directions relative to those of remote sensing. Lidar imaging systems for automotive use require a combination of long range, high spatial resolution, real-time performance and tolerance to solar background in the daytime, which has pushed the technology to its limits. Different specifications with different working principles have appeared for different possible usage cases, including short and long range, or narrow and wide fields of view. Rotating lidar imagers were the first to achieve the required performance, using a rotating wheel configuration at high speed and multiple stacked detectors [16]. However, large-scale automotive applications required additional features, like the capability to industrialize the sensor to achieve reliability and ease of manufacturing in order to get a final low-cost unit, or to have a small, nicely packaged sensor fitting in small volumes of the car. It was soon obvious that different lidar sensors were required to cover all the needs of the future self-driving car, e.g., to cover short- and long-range 3D imaging with different needs regarding fields of view. Further, the uses of such a sensor in other markets, such as robotics or defense applications, have raised a quest for the final solid-state lidar, in which different competing approaches and systems have been proposed: a new set-up proposal appears frequently, and a new patent family even more frequently.
This paper intends to neutrally introduce the basic aspects of lidar imaging systems applied to autonomous vehicles, especially regarding automobiles, which are the largest and fastest developing market. Due to the strong activity in the field, our goal has been to focus on the basic details of the techniques and components currently being used, rather than to propose a technical comparison of the different solutions involved. However, we will comment on the main advantages and disadvantages of each approach, and try to provide further bibliography on each aspect for interested readers. Furthermore, an effort has been made to avoid mentioning the technology used by each manufacturer, as such an account could hardly be complete, would be based on assumptions in some cases, and could be subject to fast changes. For those interested, there are excellent reports which identify the technology used by each manufacturer at the moment when they were written [1].
The remainder of the paper has been organized as follows. Section 2 is devoted to introducing the basics of the measurement principles of lidar, covering, in its first subsection, the three most used techniques for lidar imaging systems, which involve pulsed, amplitude-modulated and frequency-modulated approaches. A second subsection within Section 2 covers the strategies used to move from the point-like lidar measurement just described to an image-like measurement covering a field of view (FOV). In Section 3, we cover the main families of light sources and photodetectors currently used in lidar imaging units. Section 4 briefly reviews a few of the most relevant pending issues currently under discussion in the community, which need to be solved before a given solution is deployed commercially. A final section outlines the main conclusions of this paper.

2. Basics of Lidar Imaging

The measurement principle used for lidar imaging is time-of-flight (TOF), where depth is measured by counting time delays in events in light emitted from a source. Thus, lidar is an active, non-contact range-finding technique, in which an optical signal is projected onto an object we call the target, and the reflected or backscattered signal is detected and processed to determine the distance, allowing the creation of a 3D point cloud of a part of the environment of the unit. Hence, the range R or distance to the target is measured based on the round-trip delay of light waves that travel to the target. This may be achieved by modulating the intensity, phase and/or frequency of the transmitted signal and measuring the time required for that modulation pattern to appear back at the receiver. In the most straightforward case, a short light pulse is emitted towards the target, and the arrival time of the pulse's echo at the detector sets the distance. This pulsed lidar can provide resolutions around the centimeter level in single pulses over a wide window of ranges, as the nanosecond pulses used often have high instantaneous peak power. This enables reaching long distances while maintaining the average power below the eye-safety limit. A second approach is based on the amplitude modulation of a continuous wave (AMCW), where the phases of the emitted and backscattered detected waves are compared to measure distance. A precision comparable to that of the pulsed technique can be achieved, but only at moderate ranges, due to the short ambiguity distance imposed by the 2π phase ambiguity of the modulation. The reflected signal arriving at the receiver from distant objects is also not as strong as in the pulsed case, as the emission is continuous, which keeps the amplitude below the eye-safety limit at all times. Further, the digitization of the back-reflected intensity level becomes difficult at long distances. Finally, a third approach is defined by frequency-modulated continuous-wave (FMCW) techniques, enabled by direct modulation and demodulation of the signals in the frequency domain, allowing detection by a coherent superposition of the emitted and detected waves. FMCW presents two outstanding benefits ahead of the other techniques: it achieves resolutions in range measurement well below those of the other approaches, which may be down to 150 μm with 1 μm precision at long distances, although its main benefit is obtaining velocimetry measurements simultaneously with range data using the Doppler effect [17]. The three techniques mentioned are briefly discussed in the coming subsections.

2.1. Measurement Principles

2.1.1. Pulsed Approach

Pulsed TOF techniques are based on the simplest modulation principle of the illumination beam: distance is determined by multiplying the speed of light in a medium by the time a light pulse takes to travel the distance to the target. Since the speed of light is a given constant while we stay within the same optical medium, the distance to the object is directly proportional to the traveled time. The measured time is obviously representative of twice the distance to the object, as light travels to the target and back, and, therefore, must be halved to give the actual range value to the target [13,18,19]:
R = \frac{c}{2}\, t_{oF},  (1)
where R is the range to the target, c is the speed of light in free space (c = 3 × 10^8 m/s) and t_{oF} is the time it takes for the pulse of energy to travel from its emitter to the observed object and then back to the receiver. Figure 1 shows a simplified diagram of a typical implementation. Further technical details of measuring t_{oF} can be found in references like [18,20].
The attainable resolution in range (ΔR_min) is directly proportional to the available resolution in time counting (Δt_min), following ΔR_min = (c/2) Δt_min. As a consequence, the resolution in depth measurement is dependent on the resolution of the time-counting electronics. A typical resolution value of the time interval measurement can be assumed to be in the 0.1 ns range, resulting in a resolution in depth of 1.5 cm. Such values may be considered as the current reference, limited by jitter and noise in the time-counting electronics. Significant improvements in resolution may be obtained by using statistics [21], but this requires several pulses per data point, degrading sensor performance in key aspects like frame rate or spatial resolution.
Theoretically speaking, the maximum attainable range (R_max) is only limited by the maximum time interval (t_max) which can be measured by the time counter. In practice, this time interval is large enough that the maximum range becomes limited by other factors. In particular, the laser energy losses during travel (especially at diffusing targets), combined with the high bandwidth of the detection circuit (which brings larger noise and jitter), create a competition between the weak returning signal and the electronic noise, making the signal-to-noise ratio (SNR) the actual range-limiting factor in pulsed lidars [22,23]. Another aspect to be considered concerning maximum range is the ambiguity distance (the maximum range which may be measured unambiguously), which in the pulsed approach is limited by the presence of more than one simultaneous pulse in flight, and thus is related to the pulse repetition rate of the laser. As an example, using Equation (1), this ambiguity value is 150 m for laser repetition rates close to 1 MHz.
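As a numerical illustration of Equation (1) and of the resolution and ambiguity limits just discussed, the following minimal Python sketch (all figures are illustrative assumptions, not tied to any particular unit) reproduces the values quoted above:

    C = 3e8  # speed of light in free space (m/s)

    def range_from_tof(t_of):
        """Equation (1): range from the measured round-trip time."""
        return 0.5 * C * t_of

    def depth_resolution(dt_min):
        """Depth resolution set by the time-counting resolution."""
        return 0.5 * C * dt_min

    def ambiguity_range(rep_rate):
        """Maximum unambiguous range: only one pulse in flight at a time."""
        return 0.5 * C / rep_rate

    print(range_from_tof(1e-6))      # 1 us round trip  -> 150.0 m
    print(depth_resolution(0.1e-9))  # 0.1 ns counter   -> 0.015 m (1.5 cm)
    print(ambiguity_range(1e6))      # 1 MHz rep. rate  -> 150.0 m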
The pulsed principle directly measures the round-trip time between light pulse emission and the return of the pulse echo resulting from its backscattering at a target object. Thus, pulses need to be as short as possible (usually a few nanoseconds), with fast rise and fall times and large optical power. Because the pulse irradiance power is much higher than the background (ambient) irradiance power, this type of method performs well outdoors (although it is also suitable for indoor applications, where the absence of solar background reduces the requirements on emitted power), under adverse environmental conditions, and can work for long-distance measurements (from a few meters up to several kilometers). However, once a light pulse is emitted by a laser and reflected off an object, only a fraction of the optical energy may be received back at the detector. Assuming the target is an optical diffuser (which is the most usual situation), this energy is further divided among multiple scattering directions. Thus, pulsed methods need very sensitive detectors working at high frequencies to detect the faint pulses received. Since, generally speaking, pulsed methods deal with direct energy measurements, they are a case of incoherent detection [23,24,25].
The advantages of the pulsed approach include a simple measurement principle based on the direct measurement of time-of-flight, its long ambiguity distance, and the limited influence of background illumination due to the use of high-energy laser pulses. However, it is limited by the signal-to-noise ratio (SNR) of the measurement, where intense light pulses are required while eye-safety limits need to be kept, and very sensitive detectors need to be used, which may be expensive depending on the detection range. Large amplification factors in detection, together with high frequency rates, add significant complexity to the electronics. The pulsed approach is, despite these limitations, the one most frequently selected at present among the different alternatives presented by manufacturers of lidar imaging systems for autonomous vehicles, due to its simplicity and its capability to perform properly outdoors.

2.1.2. Continuous Wave Amplitude Modulated (AMCW) Approach

The AMCW approach uses the intensity modulation of a continuous lightwave instead of the laser pulses mentioned before. This principle is known as CW modulation, phase measurement, or amplitude-modulated continuous wave (AMCW). It uses the phase shift induced in an intensity-modulated periodic signal in its round trip to the target in order to obtain the range value. The optical power is modulated with a constant frequency f_M, typically of some tens of MHz, so the emitted beam is a sinusoidal or square wave of frequency f_M. After reflection at the target, a detector collects the received light signal. The distance R is deduced from the phase shift ΔΦ occurring between the reflected and the emitted signals [18,26,27]:
\Delta\Phi = k_M d = \frac{2\pi f_M}{c}\, 2R \;\Rightarrow\; R = \frac{c}{2}\,\frac{\Delta\Phi}{2\pi f_M},  (2)
where R and c are, again, the range to the target and the speed of light in free space; k_M is the wavenumber associated with the modulation frequency, d is the total distance travelled and f_M is the modulation frequency of the amplitude of the signal. Figure 2 shows the schematics of a conventional AMCW sensor.
There are a number of techniques that may be used to demodulate the received signal and to extract the phase information from it. For the sake of brevity, they will only be cited, and the reader is referred to the linked references. For example, the phase measurement may be obtained via signal processing techniques using mixers and low-pass filters [28], or, more generally, by cross-correlation of the sampled signal backscattered at the target with the original modulated signal shifted by a number (typically four) of fixed phase offsets [13,19,29,30]. Another common approach is to sample the received modulated signal and mix it with the reference signal, to then sample the resulting signal at four different phases [31]. The different types of phase meters are usually implemented as electronic circuitry of variable complexity.
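As an illustration of the four-sample demodulation cited above, the following minimal Python sketch (an idealized, noiseless simulation; the modulation frequency and target distance are arbitrary assumptions) recovers the phase shift, and from it the range through Equation (2), from four correlation samples taken at phase offsets of 0°, 90°, 180° and 270°:

    import numpy as np

    C = 3e8       # speed of light (m/s)
    F_MOD = 15e6  # assumed modulation frequency (Hz)

    def amcw_range(a0, a1, a2, a3):
        """Four-bucket demodulation: phase from samples at 0/90/180/270 deg."""
        delta_phi = np.arctan2(a1 - a3, a0 - a2) % (2 * np.pi)
        return C * delta_phi / (4 * np.pi * F_MOD)  # Equation (2)

    # Simulate the correlation samples for a target at 3.75 m
    true_phi = 4 * np.pi * F_MOD * 3.75 / C
    samples = [np.cos(true_phi - k * np.pi / 2) for k in range(4)]
    print(amcw_range(*samples))  # ~3.75 m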
In the AMCW approach, the resolution is determined by the frequency f_M of the actual ranging signal (which may be adjusted) and by the resolution of the phase meter, fixed by the electronics. By increasing f_M, the resolution is also increased if the resolution of the phase meter is fixed. However, larger f_M frequencies bring shorter unambiguous range measurements, meaning the phase value of the return signal at different range values starts to repeat itself after a 2π phase displacement. Thus, a significant trade-off appears between the maximum non-ambiguous range and the resolution of the measurement. Typical modulation frequencies are generally in the few tens of MHz range. Approaches using advanced modulated-intensity systems have been proposed, which deploy multi-frequency techniques to extend the ambiguity distance without reducing the modulation frequency [23].
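A sketch of the dual-frequency idea (a strong simplification of the multi-frequency techniques referenced above; both frequencies and the target distance are assumptions): the difference of the phases measured at two close modulation frequencies behaves like a measurement at the much lower synthetic frequency f_1 - f_2, whose ambiguity distance is correspondingly longer:

    import numpy as np

    C = 3e8
    F1, F2 = 20e6, 18e6  # two assumed modulation frequencies (Hz)

    def coarse_range(phi1, phi2):
        """Range from the synthetic frequency F1 - F2 (2 MHz -> 75 m ambiguity)."""
        dphi = (phi1 - phi2) % (2 * np.pi)
        return C * dphi / (4 * np.pi * (F1 - F2))

    # Target at 40 m: beyond the 7.5 m ambiguity range of F1 alone,
    # but well inside the 75 m ambiguity range of the synthetic frequency
    R = 40.0
    phi1 = (4 * np.pi * F1 * R / C) % (2 * np.pi)
    phi2 = (4 * np.pi * F2 * R / C) % (2 * np.pi)
    print(coarse_range(phi1, phi2))  # ~40.0 m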
Further, even though the phase measurement may be coherent in some domains, the sensitivity of the technique remains limited because of the reduced sensitivity of direct detection in the optical domain. From the point of view of the SNR, which is also related to the depth accuracy, a relatively long integration time over several modulation periods is required to obtain an acceptable signal level. In turn, this introduces motion blur in the presence of moving objects. Due to the need for these long integration times, fast shutter speeds or frame rates are difficult to obtain [24,27].
AMCW cameras have, however, been commercialized since the 1990s [32], and are often referred to as TOF cameras. They are usually implemented as parallel arrays of emitters and detectors, as discussed in Section 2.2.2, being limited by the range-ambiguity trade-off and by the limited physical integration capability of the phase-meter electronics, which are implemented pixel by pixel or for a group of pixels in the camera. This limits the spatial resolution of the point clouds to a few thousand pixels. Furthermore, AMCW modulation is usually implemented on LEDs rather than lasers, which further limits the available power and thus the SNR of the signal and the attainable range, already limited by the ambiguity distance. Moreover, the amplitude of the signal needs to be measured reliably on arrival and, in some techniques, digitized at a reasonable number of intensity levels. As a consequence, TOF cameras have little use outdoors, although they show excellent performance indoors, especially for large objects, and have been applied in a number of industries including audiovisual applications, interfacing and video gaming [27]. They have also been used inside vehicles in different applications, like passenger or driver detection and vehicle interfacing [33].

2.1.3. Continuous Wave Frequency Modulated (FMCW) Approach

In the case of the FMCW approach, the emitted instantaneous optical frequency is periodically shifted, usually by varying the power applied to the source [34]. The reflected signal is mixed with the emitted source, creating a beat frequency that is a measure of the probed distance [35]. The source is normally a diode laser to enable coherent detection. The signal is then sent to the target, and the reflected signal that arrives at the receiver, after a traveled time t_{oF}, is mixed with a reference signal built from the emitter output. For a static target, the delay between the collected light and the reference causes a constant frequency difference f_r, or beat frequency, between the mixed beams. Letting the instantaneous frequency vary under a linear law, f_r is directly proportional to t_{oF} and hence proportional to the target range too [26,36,37], following:
f_r = \mathrm{slope}\cdot\Delta\tau = \frac{B}{T}\, t_{oF} = \frac{B}{T}\,\frac{2R}{c} \;\Rightarrow\; R = \frac{f_r c T}{2B},  (3)
where B is the bandwidth of the frequency sweep, T denotes the period of the ramp, and Δτ equals the total travelled time t_{oF}. Figure 3 depicts all these parameters.
In practice, the frequency difference between the outgoing and incoming components is translated into a periodic phase difference between them, which causes an alternating constructive and destructive interference pattern at the frequency f_r, i.e., a beat signal at frequency f_r. By using an FFT to transform the beat signal from the time domain to the frequency domain, the peak at the beat frequency is easily translated into distance.
Usually, a triangular frequency modulation is used (Figure 4) rather than a ramp. The modulation frequency in this case is denoted as f_m. Hence, the rate of frequency change can be expressed as 2 f_m B [38], and the resulting beat frequency is given by:
f_r = \frac{4 R f_m B}{c}.  (4)
This type of detection has the very relevant advantage of adding the capability of measuring not only range but also, using the same signal, the velocity of the target and its sign. If the target moves, the beat frequency obtained will be related not only to R, but also to the velocity ν_r of the target relative to the sensor. The velocity contribution is taken into account through the Doppler frequency f_d, which shifts the beat frequency of the up and down sweeps (Figure 5). Thus, beat frequency components are superimposed on f_r, following [39]:
f_+ = f_r + f_d \quad \text{and} \quad f_- = f_r - f_d.  (5)
In this case, range can be obtained from:
R = c T 4 B f + + f ,
while relative velocity and its direction can also be calculated using the Doppler effect:
\nu_r = \frac{\lambda}{2}\, f_d = \frac{\lambda}{4}\,(f_+ - f_-),  (7)
showing the ability of FMCW to simultaneously measure range and relative velocity using the properties of the Fourier spectra [38,40].
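The following minimal Python sketch illustrates Equations (4)-(7) numerically (idealized, noiseless beat frequencies; the wavelength, sweep bandwidth and modulation frequency are assumptions chosen only for the example):

    C = 3e8
    WAVELENGTH = 1.55e-6  # assumed laser wavelength (m)
    B = 1.5e9             # assumed sweep bandwidth (Hz)
    F_M = 50e3            # assumed triangular modulation frequency (Hz)
    T = 1 / F_M           # modulation period (s)

    def fmcw_range_velocity(f_plus, f_minus):
        """Range and radial velocity from the two beat components, Eqs. (5)-(7)."""
        f_r = 0.5 * (f_plus + f_minus)  # range contribution
        f_d = 0.5 * (f_plus - f_minus)  # Doppler contribution
        return C * T * f_r / (4 * B), WAVELENGTH * f_d / 2  # Eqs. (6) and (7)

    # Synthesize the beats for a target at 100 m with 20 m/s radial speed
    f_r = 4 * 100.0 * F_M * B / C   # Equation (4)
    f_d = 2 * 20.0 / WAVELENGTH     # Doppler shift
    print(fmcw_range_velocity(f_r + f_d, f_r - f_d))  # (~100.0 m, ~20.0 m/s)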
FMCW takes advantage of the large frequency bandwidth available in the optical domain and exploits it to improve the performance of the range sensor. The resolution of the technique is now related to the total bandwidth of the signal. Since the ramp period can be chosen arbitrarily, the FMCW method can determine t_{oF} values in the picosecond range, equivalent to millimeter or even submillimeter distances, by performing frequency measurements in the kilohertz regime, which is perfectly feasible. Resolutions of 150 μm have been reported, which is an improvement of two orders of magnitude relative to the other approaches. Unfortunately, a perfectly linear or triangular optical frequency sweep cannot in general be realized by a linear modulation of the control current, as the frequency-current curve is nonlinear, especially close to the moments of slope change. As a consequence, deviations from the linear ramp usually occur, which, in turn, bring relevant variations in f_r. Moreover, the range resolution depends on the measurement accuracy of f_r and also on the accuracy with which the modulation slope is controlled or known [17,26,36].
The FMCW method is fundamentally different from the two previous approaches because of its use of a coherent (homodyne) detection scheme in the Fourier domain, rather than the incoherent intensity detection schemes described until now for time-counting or phase-measurement approaches [17]. FMCW has been shown to be useful in outdoor environments and to have improved resolution and long-range values relative to the pulsed and AMCW approaches. Its main benefit in autonomous vehicle applications is its ability to sense simultaneously the speed value and its direction, together with range. However, its coherent detection scheme poses potential problems related to practical issues like coherence length (interference in principle requires the beam to stay within the coherence length over the full round trip to the target), availability of durable tunable lasers with good temperature stability, external environmental conditions, accuracy of the modulation electronics, or linearity of the intensity-voltage curve of the laser, which require advanced signal processing. Although it is not the principal approach in lidar imaging systems for autonomous vehicles, some teams are currently implementing lidar solutions based on FMCW in commercial systems due to its differential advantages over the rest of the approaches described.

2.1.4. Summary

The three main measurement principles share the goal of measuring TOF but have very different capabilities and applications (see Table 1). The pulsed approach relies on an incoherent measurement principle, as it is based on the detection of intensity, and can show resolutions at the cm level, remaining operative under strong solar background with large ambiguity distances. Its main advantage is the simplicity of the setup based on direct detection of intensity, which is stable, robust and counts on available general-purpose components. The main disadvantage involves the limit in range due to low SNR at long ranges and the emission limit fixed by eye-safety levels. AMCW methods were commercialized several years ago and are exceptionally well developed and efficient in indoor environments. They have stable electronics architectures working in parallel in every pixel and are based on well-known CMOS technology. They present a resolution equivalent to that of pulsed lidars, but suffer from the complexity of determining the reliability of phase measurements from low-SNR signals. Further, this SNR limitation constrains their applications outdoors. Finally, the FMCW approach presents relevant advantages which appear to place it as the future natural choice for autonomous vehicles, as its coherent detection scheme enables improvements in the resolution of range measurements of between one and two orders of magnitude when compared to the other methods, and the use of FFT signal processing enables measuring the speed of the target simultaneously. Despite these advantages, it is a coherent system that needs to be significantly stable in its working conditions to be reliable, and aspects like temperature drift or linearity of the electronics become important, which is significant for an application that demands robustness and needs units performing stably for several years.

2.2. Imaging Strategies

Once the three main measurement strategies used in lidar imaging systems have been presented, it is worth noting that all of them have been presented as pointwise measurements. However, lidar images of interest are always 3D point clouds, which achieve accurate representations of fields of view as large as 360° around the unit. A number of strategies have been proposed in order to build lidar images out of the repetition of point measurements, but they can essentially be grouped into three different families: scanning components of different types, detector arrays, and mixed approaches. Scanning systems are used to sweep a broad number of angular positions of the field of view of interest using some beam steering component, while detector arrays exploit the capabilities of electronic integration of detectors to create an array of receiving elements, each one capturing illumination from a separate angular section of the scene to deliver a t_{oF} value for each individual detector. Some of the strategies have also been successfully combined with each other, depending on the measurement approach or requirements, and are briefly discussed in a section devoted to mixed approaches.

2.2.1. Scanners

Currently, in the automotive lidar market, most of the proposed commercial systems rely on scanners of different types [41,42,43,44]. In the most general approach, the scanner element is used to re-position the laser spot on the target by modifying the angular direction of the outgoing beam, in order to generate a point cloud of the scene. This poses questions related to the scanning method, its precision, its speed, its field of view, and its effect on the beam footprint on the target, which directly affects the spatial resolution of the images [45].
A large number of scanning strategies have been proposed for lidar and laser scanning, and are currently effective in commercial products. This includes e.g., galvanometric mirrors [46] or Risley prisms [47]. When it comes to its use in autonomous vehicles, three main categories may be found: mechanical scanners, as described, which use rotating mirrors and galvanometric or piezoelectric positioning of mirrors and prisms to perform the scanning; micro-electromechanical system (MEMS) scanners, which use micromirrors actuated using electromagnetic or piezoelectric actuators to scan the field of view, often supported by expanding optics; and optical phased arrays (OPAs), which perform pointing of the beam based on a multibeam interference principle from an array of optical antennas. Direct comparison of scanning strategies for particular detector configurations has been proposed [42]. Although we will focus on these three large families, it is worth noting that other approaches have been proposed based on alternative working principles such as liquid crystal waveguides [48], electrowetting [49], groups of microlens arrays [50] and even holographic diffraction gratings [51].
(a) Mechanical scanners
Lidar imaging systems based on mechanical scanners use high-grade optics and some kind of rotating or galvanometric assembly, usually with mirrors or prisms attached to mechanical actuators, to cover a wide field of view. In the case of lidar, units with sources and detectors jointly rotate around a single axis. This may be done by sequentially pointing the beam across the target in 2D, as depicted in Figure 6, or by rotating the optical configuration around a mechanical axis, in which case a number of detectors may be placed in parallel along the spinning axis. In this latter case, 360° FOVs of the sensor may be achieved, covering the whole surroundings of the vehicle. The mirrors or prisms used may be either rotating or oscillating, or may be polygon mirrors. This is the most popular scanning solution for many commercial lidar sensors, as it provides straight and parallel scan lines with a uniform scanning speed over a vast FOV [52,53], or angularly equispaced concentric data lines. In rotating mirrors, the second dimension is usually obtained by adding more sources and/or detectors to measure different angular directions simultaneously. Further, the optical arrangement of rotating lidar units may be simple and extremely efficient in collecting faint diffuse light, and thus achieve very long ranges, as there is some margin in the size of the collection optics, especially when compared to other approaches.
Lidars with mechanical scanners work almost always with pulsed sources and are usually significantly large and bulky. In the case of rotating mirrors, they can achieve high spatial resolution in the direction of rotation (usually horizontal), although they become limited in the orthogonal direction (usually vertical), where the density of the point cloud is limited by the number of available sources and detectors measuring in parallel. Further, they need rather high power consumption and, due to the large inertia of the rotating module, the frame rate is limited (frame rates go from below 1 Hz to about 100 Hz). They are, however, very efficient in long-range applications (polygon mirrors combined with coaxial optical systems easily reach distances well beyond 1 km). Despite the current prevalence of this type of scanner, the setup presents numerous disadvantages in a final consumer unit, in particular the question of reliability and maintenance of the mechanisms, the mass and inertia of the scanning unit which limit the scanning speed, the lack of flexibility of the scanning patterns, and the issue of being misalignment-prone under shock and vibration, beyond being power-hungry, hardly scalable, bulky and expensive. Although several improvements are being introduced in these systems [43], there is quite general agreement that mechanically scanning lidars need to move towards a solid-state version. However, they currently provide one of the performances closest to that desired for the final long-range lidar unit, and they are commercially available from different vendors using different principles. This makes them the sensor of choice for autonomous vehicle research and development, such as algorithm training [54], autonomous cars [55] or robotaxis [56]. Their success has also brought a number of improvements in geometry, size and spatial resolution to make them more competitive as a final lidar imaging system in automotive applications.
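As a rough numerical sketch of these trade-offs (all figures below are illustrative assumptions, not the specifications of any commercial unit), the point-cloud density of a spinning lidar follows directly from the channel count, the firing rate and the rotation speed:

    def spinning_lidar_specs(n_channels, fire_rate_hz, rot_rate_hz):
        """Approximate point-cloud figures for a spinning lidar."""
        az_step_deg = 360.0 * rot_rate_hz / fire_rate_hz   # per channel
        points_per_turn = n_channels * fire_rate_hz / rot_rate_hz
        points_per_second = n_channels * fire_rate_hz
        return az_step_deg, points_per_turn, points_per_second

    # 64 stacked channels, each firing at 20 kHz, spinning at 10 Hz
    print(spinning_lidar_specs(64, 20e3, 10))
    # -> (0.18 deg azimuth step, 128000 points/turn, 1.28e6 points/s)

Note how raising the rotation (frame) rate directly coarsens the azimuth step, which is one reason frame rates of mechanical units stay limited.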
(b) Microelectromechanical scanners
Microelectromechanical system (MEMS)-based lidar scanners enable programmable control of the laser beam position using tiny mirrors, only a few mm in diameter, whose tilt angle varies when a stimulus is applied, so the angular direction of the incident beam is modified and the light beam is directed to a specific point in the scene. This may be done bidimensionally (in 2D), so a complete area of the target is scanned. Various actuation technologies have been developed, including electrostatic, magnetic, thermal and piezoelectric. Depending on the application and the required performance (regarding scanning angle, scanning speed, power dissipation or packaging compatibility), one or another technology is chosen. The most common stimulus in lidar applications based on MEMS scanners is voltage; the mirrors are steered by drive voltages generated from a digital representation of the scan pattern stored in a memory. Then, the digital numbers are mapped to analog voltages with a digital-to-analog converter. However, electromagnetic and piezoelectric actuation have also been successfully reported for lidar applications [52,57,58,59].
Thus, MEMS scanners substitute macroscopic mechanical-scanning hardware with an electromechanical equivalent reduced in size. A reduced FOV is obtained compared to the previously described rotary scanners because they have no rotating mechanical components. However, using multiple channels and fusing their data allows us to create FOVs and point cloud densities able to rival or improve on mechanical lidar scanners [60,61]. Further, MEMS scanners typically have resonance frequencies well above those of the vehicle, enhancing maintenance and robustness aspects.
MEMS scanning mirrors are categorized into two classes according to their operating mechanical mode: resonant and non-resonant. On the one hand, non-resonant MEMS mirrors (also called quasi-static MEMS mirrors) provide a large degree of freedom in the trajectory design. Although a rather complex controller is required to keep the scan quality, desirable scanning trajectories with constant scan speed over large scan ranges can be generated by an appropriate controller design. Unfortunately, one key specification, the scanning angle, is quite limited in this family compared to resonant MEMS mirrors. Typically, additional optomechanics are required in order to enlarge the scan angle, adding optical aberrations, like distortion, to the scanned field. On the other hand, resonant MEMS mirrors provide a large scan angle at a high frequency and a relatively simple control design. However, the scan trajectory is sinusoidal, i.e., the scan speed is not uniform. Moreover, their design needs to strike a balance among scan angle, resonance frequency and mirror size to reach the desired resolution, while still keeping the mirror optically flat to avoid additional image distortions which may affect the accuracy of the scan pattern [52,62,63]. Laser power handling at the surface of the mirror is also an issue that needs to be carefully taken into account to avoid mirror damage, especially in long-range units.
A 2D laser spot projector can be implemented either with a single mirror with two oscillation axes or with two separate, orthogonal mirrors each oscillating along one axis. Single-axis scanners are simpler to design and fabricate, and are also far more robust to vibration and shock; however, dual-axis scanners provide important optical and packaging advantages, essentially related to the simplicity of the optical arrangement and the accuracy required in the relative alignment of the two single-axis mirrors. One crucial difficulty with dual-axis scanners is the crosstalk between the two axes. With the increasing requirements on point cloud resolution, it becomes harder to keep the crosstalk at acceptable levels, which would be a reason to choose the bulkier and more complex two-mirror structure. The most common system architecture is raster scanning, where a low-frequency, linear vertical scan (quasi-static) is paired with an orthogonal high-frequency, resonant horizontal scan. For the raster scanner, the fast scanner will typically run at frequencies of some kHz (typically exceeding 10 kHz), and should provide a large scan angle [63,64].
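A minimal sketch of the raster architecture just described, assuming a 12 kHz resonant fast axis and a 30 Hz quasi-static slow axis (both values, and the scan amplitudes, are illustrative; real drive electronics and trajectory control are far more involved):

    import numpy as np

    def raster_angles(t, fast_hz=12e3, slow_hz=30.0,
                      fast_amp_deg=20.0, slow_amp_deg=10.0):
        """Mirror angles: sinusoidal resonant fast axis, linear quasi-static slow axis."""
        x = fast_amp_deg * np.sin(2 * np.pi * fast_hz * t)    # non-uniform speed
        y = slow_amp_deg * (2 * ((slow_hz * t) % 1.0) - 1.0)  # sawtooth ramp
        return x, y

    t = np.arange(0.0, 1 / 30.0, 1e-6)  # one frame (one slow-axis period)
    x, y = raster_angles(t)
    print(x.min(), x.max(), y.min(), y.max())  # scan extents in degrees

With these assumed values, each frame contains 12e3/30 = 400 fast-axis lines, illustrating the resolution/frame-rate coupling mentioned above.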
A detailed overview of MEMS laser scanners is provided in [62], where the topics covered previously may be found expanded. Extended discussions on accuracy issues in MEMS mirrors may also be found in [63].
Due to their promising advantages (in particular, being lightweight and compact and having low power consumption), MEMS-based scanners for lidar have received increasing interest for use in automotive applications. MEMS-based lidar imaging systems have in parallel shown the feasibility of the technology in different scenarios, such as space applications and robotics. Currently, the automotive utilization of MEMS for lidar is under growing development by a number of companies [52,65,66,67,68], becoming one of the preferred solutions at present.
(c) Optical Phased Arrays
An optical phased array (OPA) is a novel type of solid-state device that enables us to steer the beam using a multiplicity of micro-structured waveguides. Its operating principle is equivalent to that of microwave phased arrays, where the beam direction is controlled by tuning the phase relationship between arrays of transmitter antennas. By aligning the phases of several coherent emitters, the emitted light interferes constructively in the far field at certain angles, enabling the beam to be steered (Figure 7). While phased arrays in radio were first explored more than a century ago, optical beam steering by phase modulation was first demonstrated in the late 1980s [69,70,71,72].
In an OPA device, an optical phase modulator controls the speed of light passing through the device. Regulating the speed of light enables control of the shape and orientation of the wavefront resulting from the combination of the emission from the synced waveguides. For instance, the top beam is not delayed, while the middle and bottom beams are delayed by increasing amounts at will. This phenomenon effectively allows the deflection of a light beam, steering it in different directions. OPAs can achieve very stable, rapid and precise beam steering. Since there are no mechanical moving parts at all, they are robust and insensitive to external constraints such as acceleration, allowing extremely high scanning speeds (over 100 kHz) over large angles. Moreover, they are highly compact and can be integrated in a single chip. However, the insertion loss of the laser power in the OPA is still a drawback [4,73], as is their currently limited ability to handle the large power densities required for long-range lidar imaging.
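The steering relation can be illustrated with the classical phased-array expression for a uniform linear array (a simplified sketch; the wavelength and emitter pitch below are assumptions): a phase increment Δφ between adjacent emitters of pitch d steers the main lobe to the angle θ satisfying sin θ = λΔφ/(2πd):

    import numpy as np

    WAVELENGTH = 1.55e-6  # assumed wavelength (m)
    PITCH = 2.0e-6        # assumed emitter spacing (m)

    def steering_angle_deg(delta_phi):
        """Main-lobe direction for a per-element phase increment (rad)."""
        return np.degrees(np.arcsin(WAVELENGTH * delta_phi / (2 * np.pi * PITCH)))

    for dphi in (0.0, np.pi / 8, np.pi / 4, np.pi / 2):
        print(dphi, steering_angle_deg(dphi))  # larger gradients steer further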
OPAs have gained interest in recent years as an alternative to traditional mechanical or MEMS-based beam steering techniques because they completely lack the inertia which limits the ability of those techniques to reach a large steering range at high speed. Another relevant advantage comes from the fact that the steering elements may be integrated with an on-chip laser. As a developing technology with high potential, interest in OPAs for automotive lidar is growing in academia and industry, even though OPAs are still under test for long-range lidar. However, they are operative in some commercially available units targeting short and mid ranges [74]. Recently, sophisticated OPAs have been demonstrated with performance parameters that make them seem suitable for high-power lidar applications [71,75,76]. The combination of OPAs and FMCW detection has the theoretical potential to make a lidar system fully solid-state and on-chip, which becomes one of the best potential combinations for a lidar unit. An on-chip lidar fabricated using combined CMOS and photonics manufacturing, if industrialized, has huge potential regarding reliability and reduced cost. However, both technologies, and in particular OPAs, need further development from their current state.

2.2.2. Detector Arrays

Given the limitations of scanning lidar approaches based on moving elements in the automotive market, alternative imaging methods beyond MEMS scanners and OPAs have been proposed to overcome them. These scannerless techniques typically combine specialized illumination strategies with arrays of receivers. Transmitting optical elements illuminate a whole scene, and a linear array (or matrix) of detectors receives the signals of separate angular subsections in parallel, allowing range data of the target to be obtained in a single shot (Figure 8), which makes real-time applications easy to manage. The illumination may be pulsed (flash imagers) or continuous (AMCW or FMCW lidars).
With the exception of FMCW lidars, where coherent detection enables longer ranges, flash imagers and imagers based on the AMCW principle (TOF cameras) are limited to medium and short ranges. In flash lidars, the emitted light pulse is dispersed in all directions, significantly reducing the SNR, while in TOF cameras the phase ambiguity effect limits the measured ranges to a few meters. A brief description of the basic working principle of each is provided next.
(a) Flash imagers
One very successful architecture for lidar imaging systems in autonomous vehicles is flash lidar, which has progressed to a point where it is very close to commercial deployment in short and medium-range systems. In a flash lidar, imaging is obtained by flood-illuminating a target scene, or a portion of a target scene, using pulsed light. The backscattered light is collected by the receiver, which divides it among multiple detectors. These systems respond to the schematics in Figure 8, considering a pulsed source and appropriate optics to expand the beam over the scene of interest. Each detector captures the distance, and sometimes the reflected intensity, using the conventional time-of-flight principle. Hence, both the optical power imaged onto a 2D array of detectors and the 3D point cloud are directly obtained with a single laser shot at the target [45,77,78].
In a flash lidar system, the FOV of the array of detectors needs to closely match the illuminated region of the scene. This is usually achieved by using an appropriate divergent optical system which expands the laser beam to illuminate the full FOV. The beam divergence of the laser is usually arranged to optically match the receiver FOV, allowing all the pixels in the array to be illuminated at once. Each detector in the array is individually triggered by the arrival of a pulse return, and measures both its intensity and the range. Thus, the spatial resolution strongly depends on the resolution of the camera, that is, on the density with which the detectors have been packed, usually limited by CMOS technology patterning. The resolution in depth usually depends on the pulse width and the accuracy of the time-counting device. Typically, the spatial resolution obtained is not very high due to the size and cost of the focal plane array used. Values around some tens of kilopixels are usual [42], limited by the cost and size of the detector (as they usually work at 1.55 μm and are thus InGaAs-based). Resolution in depth and angular resolution are comparable to those of scanning lidars, and even better in some arrangements.
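As a quick numerical sketch of how the array size limits spatial resolution (the FOV, pixel count and range below are illustrative assumptions): with the transmitter divergence matched to the receiver FOV, each pixel subtends FOV/N and its footprint on the target grows linearly with range:

    import numpy as np

    def pixel_footprint(fov_deg, n_pixels, range_m):
        """Per-pixel angular subtense and spot size on the target."""
        ifov_rad = np.radians(fov_deg) / n_pixels
        return np.degrees(ifov_rad), ifov_rad * range_m

    # 30 deg of horizontal FOV over a 320-pixel-wide array, target at 50 m
    print(pixel_footprint(30.0, 320, 50.0))  # ~0.094 deg/pixel, ~0.082 m spot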
Since the light intensity from the transmitter is dispersed over a relatively large angle to cover the full scene, and such a value is limited by eye-safety considerations, the measurement distance depends on the sensing configuration, including aspects like emitted power, sensor FOV, and detector type and sensitivity. It can vary from tens of meters to very long distances, although at present flash lidars are used in the 20 m to 150 m range in automotive applications. Range and spatial resolution of the measurement obviously depend on the FOV considered for the system, which limits the entrance pupil of the optics and the area over which the illuminator needs to spread enough power to be detected afterwards. The divergence of the illuminating area and the backscattering at the target significantly reduce the amount of optical power available, so very high peak illumination power and very sensitive detectors are required in comparison with single-pixel scanners. The detectors are usually SPADs (single-photon avalanche diodes), discussed in Section 3.2.2. This has kept present flash setups concentrated on medium and short-range applications in autonomous vehicles, where they take advantage of their lack of moving elements, and where they have acceptable costs in mass production due to the simplicity of the setup [52].
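The strong dependence of range on emitted power and collection aperture can be estimated with the elementary link budget for a Lambertian target filling the beam (a simplified sketch: atmospheric losses and further system terms are ignored, and all parameter values below are assumptions):

    import numpy as np

    def received_power(p_tx, reflectivity, aperture_diam_m, range_m, eta=0.8):
        """Backscattered power collected from a Lambertian target (W)."""
        a_rx = np.pi * (aperture_diam_m / 2) ** 2
        return p_tx * reflectivity * eta * a_rx / (np.pi * range_m ** 2)

    # 100 W peak pulse, 10% reflective target, 25 mm receiver aperture
    for r_m in (20.0, 50.0, 150.0):
        print(r_m, received_power(100.0, 0.1, 0.025, r_m))  # falls as 1/R^2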
Eye-safety considerations are significant in these flood-illumination approaches [52]. However, the main disadvantage of flash imagers comes from the type of response of the detectors used, which operate in Geiger mode and thus produce a flood of electrons at each detection. The presence of retro-reflectors in the real-world environment, designed to reflect most of the incoming light back towards its source while scattering very little, so as to be visible at night when illuminated by car headlamps, is a significant problem for these cameras. On roads and highways, for instance, retro-reflectors are routinely used in traffic signs and license plates. In practice, retro-reflectors flood the SPAD detector with photons, saturating it and blinding the entire sensor for some frames, rendering it useless. Some schemes based on interference have been proposed to avoid such problems [79]. Issues related to the mutual interference of adjacent lidars, where one lidar detects the illumination pattern of the other, are also expected to be a hard problem to solve in flash imagers. On the positive side, since flash lidars capture the entire scene in a single image, the data capture rate can be very fast, so the method is very resilient to vibration effects and movement artifacts, which could otherwise distort the image. Hence, flash lidars have proven to be useful in specific applications such as tactical military imaging scenarios where both the sensor platform and the target move during the image capture, a situation also common in vehicles. Other advantages include the elimination of scanning optics and moving elements, and the potential for creating a miniaturized system [80,81]. This has resulted in systems based on flash lidars being effectively commercialized at present in the automotive market [82].
(b) AMCW cameras
A second family of lidar imaging systems that use detector arrays are the TOF cameras based on the AMCW measuring principle. As described in Section 2.1.2, these devices modulate the intensity of the source and then measure the phase difference between the emitted and the detected signal at each pixel of the detector. This is done by sampling the returned signal at least four times per modulation period. Detectors for these cameras are manufactured using standard CMOS technology, so they are small and low-cost, based on well-known technology and principles, and capable of short-distance measurements (from a few centimeters to several meters) before they run into ambiguity issues due to the periodicity of the modulation. Typical AMCW lidars with a range of 10 m may be modulated at around 15 MHz, thus requiring the signal at each pixel to be sampled at around 60 MHz. Most commercially available TOF cameras operate by modulating the light intensity in the near-infrared (NIR) and use arrays of detectors where each pixel or group of pixels incorporates its own phase-meter electronics. In practice, this caps the spatial resolution of the scenes at the available capabilities of lithographic patterning, with most cameras limited to some tens of thousands of points per image. It is a limit comparable to that of the flash array detectors just mentioned, also due to manufacturing limitations. However, TOF cameras usually operate in the NIR region, so their detectors are based on silicon, while flash imagers more often work at 1.55 μm for eye-safety considerations. The AMCW strategy is less suitable for outdoor purposes due to the effects of background light on the SNR and to the number of digitization levels needed to reliably measure the phase of the signal. However, TOF cameras have been widely used indoors in numerous other fields, such as robotics, computer vision and home entertainment [83]. In automobiles, they have been proposed for occupant monitoring inside the car. A detailed overview of current commercial TOF cameras is provided in [13].

2.2.3. Mixed Approaches

Some successful proposals of lidar imagers have mixed the two imaging modalities presented above; that is, they have combined some scanning approach with some multiple-detector arrangement. The cylindrical geometry enabled by a single rotating axis, combined with a vertical array of emitters and detectors, has been very successful [84] in parallelizing the conventional point-scanning approach both in emission and detection. These so-called spinning lidars increase data and frame rates while giving each detector a narrow field of view, which makes them very efficient energetically and, thus, able to reach long distances. A comparable approach has also been developed for flash lidars, with an array of detectors and multiple beams mounted on a rotating axis [85]. These spinning approaches obviously demand line-shaped illumination of the scene. They may enable 360° vision, in contrast with MEMS or flash approaches, which only retrieve the image within the predefined FOV of the unit.
Another interesting and successful mixed approach has been to use 1D MEMS mirrors to scan a projected line onto the target, which is then recovered either through cylindrical optics onto a 2D detector array, or onto a 1D array of detectors [52]. This method significantly reduces the size and weight of the unit while enabling large point rates, and provides very small and efficient sensors without macroscopic moving elements.

2.2.4. Summary

The single-point measurement strategies used in lidar need imaging strategies to deliver the desired 3D maps of the surroundings of the vehicle. We divided them into scanners and detector arrays. A summary of the key points described is provided in Table 2.
While mechanical scanners are now prevalent in the industry, the presence of moving elements poses a threat to the durability and reliability of the unit in the long term, due to sustained shock and vibration conditions. MEMS scanners appear to be the alternative, as they are lightweight, compact units with resonance frequencies well above those typical in a vehicle. They can scan large angles using adapted optical systems and use little power. However, they present problems related to the linearity of the motion and to heat dissipation. OPAs would be the ideal solution, providing static beam steering, but they still present problems in the management of large power values, which currently keep them at the research level for long-range use. Nonetheless, OPAs are a feasible alternative for the future in long-range applications, and they are currently commercialized in short and medium-range applications.
On the detector array side, flash lidars provide a good solution without any moving elements, especially at close and medium ranges, where they achieve performance figures comparable to scanning systems. Large spatial resolution using InGaAs detectors is, however, expensive, and the use of detectors in Geiger mode poses issues related to the presence of retroreflective signs, which can temporarily blind the sensor. The AMCW approach, as described, is based on detector arrays but is not reliable enough in outdoor environments. Intermediate solutions combining cylindrical scanning geometries (rotation around an axis) with line illuminators and linear detector arrays along the spinning axis have also been proposed, both using the scanning and the flash approaches.
We do not want to finish this Section without mentioning an alternative way of classifying lidar sensors, which by now should be obvious to the reader. We preferred an approach based on the components utilized for building the images, as it provides an easier connection with the coming Sections. However, an alternative classification based on how lidars illuminate the target would have also been possible, dividing them into those that illuminate the target point by point (scanners), as 1D lines (like the cylindrical approach just described in Section 2.2.3), or as a simultaneous 2D flash (flash imagers or TOF cameras). Such a classification also covers all the different families of lidar imaging systems. An equivalent classification based on the detection strategy (single detector, 1D array, 2D array) would also have been possible.

3. Sources and Detectors for Lidar Imaging Systems in Autonomous Vehicles

Lidar systems illuminate a scene and use the backscattered signal that returns to a receiver for range-sensing and imaging. Thus, the basic system must include a source or transmitter, a sensitive photodetector or receiver, the data processing electronics, and a strategy to obtain the information from the whole target, essential for the creation of the 3D maps and/or proximity data images. We have just discussed the main imaging strategies; the data processing electronics are usually specialized regarding their firmware or code, but in general there is no choice of technology specific to lidar for these components. Data processing and device operation are usually managed by variable combinations of field programmable gate arrays (FPGAs), digital signal processors (DSPs), microcontrollers or even computers, depending on the system architecture. Efforts oriented to dedicated chipsets for lidar are emerging and have reached the commercial level [86], although they still need dedicated sources and detectors.
Thus, we believe a dedicated section briefly reviewing the particularities of sources and detectors used in lidar imaging systems is required to complete this paper. Due to the strong requirements on frame rate, spatial resolution and SNR imposed on lidar imaging systems, light sources and photodetectors become key state-of-the-art components of the unit, subject to strong development efforts, with different trade-offs and performance limits which affect the overall performance of the lidar setup.

3.1. Sources

Lidars usually employ laser sources with wavelengths in the infrared region, typically from 0.80 to 1.55 μm, to take advantage of the atmospheric transmission window, and in particular of the transmission of water at those wavelengths [87], while enabling the use of beams not visible to the human eye. The sources are mainly used in three regions: a waveband from 0.8 μm to 0.95 μm, dominated by diode lasers which may be combined with silicon-based photodetectors; lasers at 1.06 μm, still usable with Si detectors and usually based on fibre lasers; and lasers at 1.55 μm, available from the telecom industry, which need InGaAs detectors. However, other wavelengths are possible, as the lidar detection principle is general and works in most cases regardless of the wavelength selected. The wavelength is generally chosen based on considerations of cost and eye safety. 1.55 μm, for instance, enables much more power within the eye-safety limit defined by Class 1 [88], but may become really expensive if detectors need to be larger than the conventional telecom size (200 μm).
Thus, sources are based either on lasers or on nonlinear optical systems driven by lasers, although other sources (such as LEDs) may be found, usually in short-range applications. Different performance features must be taken into account in order to select the most suitable source according to the system purpose. The most relevant for automotive include peak power, pulse repetition rate (PRR), pulse width, wavelength (including purity and thermal drift), emission (single-mode, beam quality, CW/pulsed), size, weight, power consumption, shock resistance and operating temperature. As is typical of lidar imaging systems, such performance features involve trade-offs; for instance, a large peak power typically goes against a large PRR or spectral purity.
Currently, the most popular sources for use in lidar technology are solid-state lasers (SSLs) and diode lasers (DLs), with few exceptions. SSLs employ insulating solids (crystals, ceramics or glasses) with added elements (dopants) that provide the energy levels needed for lasing. A process called optical pumping provides the energy for exciting those energy levels and creating the population inversion required for lasing; generally, SSLs use DLs as pumping sources. The use of SSLs for active range-finding started in 1960 with the SS ruby laser, thanks to the development of techniques to generate pulses with a duration of nanoseconds. Since then, almost every type of laser developed has been employed in demonstrations of rangefinders. SSLs can be divided into a wide range of categories, for example bulk or fiber, with the latter increasingly becoming prevalent in lidar for autonomous vehicles, not only because of its efficiency and its capacity to generate high average powers with high beam quality and PRR, but also because the fiber itself provides an easy way to mount and align the lidar without delicate free-space optics.
Beyond fiber lasers, increasing attention has been devoted to microchip lasers. They are, perhaps, the ultimate in miniaturization of diode-pumped SSLs, and their robust, readily mass-produced structure is very attractive. Their single-frequency CW performance and their excellence at generating sub-nanosecond pulses make microchip lasers well suited to many applications, lidar among them. They need free-space alignment, unlike fibers, but can also achieve large peak energies with excellent beam quality. A more detailed description of all these laser sources, and of some others less relevant to lidar, can be found in [45].
Finally, semiconductor DLs are far more popular in the industry due to their lower cost and their countless applications beyond lidar. They are considered a separate laser category, although they are also made from solid-state material. In a DL, the population inversion leading to lasing takes place in a thin layer at a semiconductor PN junction, named the depletion zone. Under electrical polarization, the recombination of electrons and holes in the depletion zone produces photons that remain confined inside the region. This provides lasing without the need for an optical pump, and it is the reason why DLs can directly convert electrical energy into laser light output, making them very efficient. Different configurations are available for improved performance, including Fabry–Perot cavities, vertical-cavity surface-emitting lasers, distributed feedback lasers, etc. [89].

Sources in Lidar Imaging Systems

(a) Fiber lasers
Fiber lasers use optical fibers as the active medium for lasing. The optical fiber is doped with rare-earth elements (such as Er³⁺) and one or several fiber-coupled DLs are used for pumping. In order to turn the fiber into a laser cavity, some kind of reflector (mirror) is needed to form a linear resonator or to build a fiber ring architecture (Figure 9). In commercial products, it is common to use a Bragg grating at the end of the fiber, which reflects back a portion of the photons. Although, in broad terms, the gain medium of fiber lasers is similar to that of SS bulk lasers, the wave-guiding effect of the fiber and the small effective mode area usually lead to substantially different properties. For example, fiber lasers often operate with much higher laser gain and resonator losses than SSLs [90,91,92,93].
Fiber lasers can have very long active regions, so they can provide very high optical gain. Currently, there are high-power fiber lasers with outputs of hundreds of watts, sometimes even several kilowatts (up to 100 kW in continuous-wave operation) from a single fiber [94]. This potential arises from two main reasons: on one side, a high surface-to-volume ratio, which allows efficient cooling because the heating is low and distributed; on the other, the guiding effect of the fiber, which avoids thermo-optical problems even under conditions of significant heating [95]. The fiber's waveguiding properties also permit the production of a diffraction-limited spot, that is, the smallest spot allowed by the laws of physics, with very good beam quality, introducing a very small beam divergence linked only to the numerical aperture of the fibre, which usually stays within 0.1 deg [96,97]. This divergence determines the footprint of the beam on the target, and thus is directly related to the spatial resolution of the lidar image.
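As a rough illustration of this last point, the following sketch (our own simplification, assuming a full-angle divergence and neglecting atmospheric effects) relates the divergence to the footprint on the target:

```python
import math

def footprint_diameter(range_m, full_divergence_deg, exit_aperture_m=0.0):
    """Approximate beam footprint at a given range for a full-angle
    divergence, ignoring atmospheric turbulence and scattering."""
    return exit_aperture_m + 2 * range_m * math.tan(math.radians(full_divergence_deg) / 2)

# With the ~0.1 deg divergence quoted above, the footprint at 100 m is
# about 0.17 m, which bounds the achievable lateral resolution:
print(round(footprint_diameter(100.0, 0.1), 3))  # 0.175
```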
Furthermore, since the light is already coupled into a flexible fiber, it is easily delivered to a movable focusing element, allowing convenient power delivery and high stability under movement. Fiber lasers also enable configurable external trigger modes to ease system synchronization and control, and have a compact size because the fiber can be bent and coiled to save space. Other advantages include their reliability, the availability of different wavelengths, their large PRR (>0.5 MHz is a typical value) and their small pulse width (<5 ns). Fiber lasers provide a good balance between peak power, pulse duration and pulse repetition rate, very well suited to lidar specifications. Nowadays, their main drawback is their cost, which is much larger than that of the alternatives [45].
(b) Microchip lasers
Microchip lasers are bulk SS lasers with a piece of doped crystal or glass working as the gain medium. Their design allows optical pump power to be transmitted into the material, as well as the generated laser power to spatially overlap the region of the material which receives the pump energy (Figure 10). The ability to store large amounts of energy in the laser medium is one of the key advantages of bulk SS lasers and is unique to them. A Q-switch (active or passive) induces high losses in the cavity until the stored energy is released, either when a saturable absorber saturates (passive Q-switch) or when a shutter is actively opened (active Q-switch) [98,99].
The active medium of microchip lasers is a millimeter-thick, longitudinally pumped crystal (such as Nd:YAG) directly bonded to a solid medium with saturable absorption at the laser wavelength. When the stored energy bleaches the absorber, the cavity losses drop abruptly; this transient condition results in the generation of a pulse containing a large fraction of the energy initially stored in the laser material, with a width related to the round-trip time of light in the laser cavity. Due to the bulk energy storage combined with a short laser resonator, which leads to a very short round-trip time, microchip lasers are well suited to the generation of high-power short pulses, well beyond conventional lidar specs. Simple construction methods are enough to obtain short pulses on the nanosecond scale (typically <5 ns). Q-switched microchip lasers may also allow the generation of unusually short pulses with a duration below 1 ns, although in lidar this poses problems in the bandwidth of the amplification electronics at the detector. Particularly with passive Q-switching, it is possible to achieve high PRRs in the MHz region combined with short pulses of a few ns, very well suited to present lidar needs. For lower repetition rates (around 1 kHz), pulse energies of some microjoules and pulse durations of a few nanoseconds allow for large peak powers (>1 kW). Beam quality can be very good, even diffraction-limited [100,101,102,103].
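The peak-power figures above follow from simple arithmetic, as this small sketch (illustrative values only) shows:

```python
def peak_power_w(pulse_energy_j, pulse_width_s):
    """Peak power of an approximately rectangular pulse: energy over duration."""
    return pulse_energy_j / pulse_width_s

# The microjoule/nanosecond figures quoted above already reach the kW level:
print(peak_power_w(5e-6, 5e-9))  # 1000.0 W
```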
Microchip lasers provide an acceptable balance between peak power and pulse duration in a compact and cost-effective design in scale production. They have found their own distinctive role, which makes them very suitable for a large number of applications. Many of these applications benefit from their compact and robust structure and small electric power consumption. In other cases, their excellence at generating sub-nanosecond pulses and/or their high achievable pulse repetition rates are of interest. For example, in laser range-sensing it is possible to achieve a very high range resolution (down to 10 cm or less) thanks to the short pulse duration [104,105]. Their cost sits midway between DLs and fiber lasers, although closer to the level of fiber lasers.
(c) Diode lasers
There are two major types of DLs: interband lasers, which use transitions from one electronic band to another; and quantum cascade lasers (QCLs), which operate on intraband transitions relying on an artificial gain medium made possible by quantum-well structures built using band-structure engineering. Interband DLs are by far the most widespread in use but, recently, QCLs have emerged as an important source of mid- and long-wave infrared emission [4,45,106].
Interband DLs are electrically pumped semiconductor lasers in which the gain is generated by an electrical current flowing through a PN junction or, more frequently, a P-doped–Intrinsic–N-doped (PIN) structure for improved performance. A properly polarized PIN junction makes the I-region the active medium by expanding the depletion zone, where carriers can recombine releasing energy as photons, while at the same time acting as a waveguide for the generated photons. The goal is to recombine electrons and holes to create light confined by mirrors at the edges of the I-region, one of them partially reflective so the radiation can escape the cavity. This process can be spontaneous, but it can also be stimulated by incident photons, effectively leading to optical amplification and laser emission [89,107,108,109].
Interband lasers come in a number of geometries that correspond to systems operating in very different regimes of optical output power, wavelength, bandwidth, and other properties. Some examples include edge-emitting lasers (EEL, Figure 11), vertical-cavity surface-emitting lasers (VCSELs), distributed feedback lasers (DFB), high-power stacked diode bars and external-cavity DLs. Many excellent reviews discuss the characteristics and applications of each of these laser types [45,110,111,112,113,114].
Regarding lidar applications, DLs have the advantage of being cheap and compact, two key properties for autonomous vehicles. In particular, DLs are extremely good regarding their cost-to-peak-power figure of merit. Although they provide less peak power than fiber or microchip lasers, it is still enough for several lidar applications (maximum peak power may reach the 10 kW range in optimal conditions) [115]. However, they are limited by a reduced pulse repetition rate (∼100 kHz) and an increased pulse width (∼100 ns). Furthermore, high-power DLs usually come in the EEL configuration, yielding a degraded beam quality with fast and slow axes diverging differently, which negatively affects the laser footprint and the spatial resolution. Nevertheless, they are currently used in almost every lidar, either as a direct source or as optical pumps in SSLs [116].
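As a side remark, the reduced PRR does not limit the measurable range in practice: for a pulsed lidar, the unambiguous range is capped at c/(2·PRR), as the following sketch (a standard relation, not specific to any product) illustrates:

```python
C = 299_792_458.0  # speed of light (m/s)

def max_unambiguous_range_m(prr_hz):
    """A pulsed lidar should receive an echo before firing the next pulse,
    so the PRR caps the unambiguous range at c / (2 * PRR)."""
    return C / (2 * prr_hz)

print(round(max_unambiguous_range_m(100e3)))  # ~1499 m: ample for automotive
```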
Table 3 presents a brief summary of the key characteristics of the different laser sources described when used in lidar applications.

3.2. Photodetectors

Along with light sources, photodetectors are the other main component of a lidar system with dedicated features. The photodetector is the critical photon-sensing device in an active receiver which enables the TOF measurement. It needs to have a large sensitivity for direct detection of intensity, and it also has to be able to detect short pulses, thus requiring a high bandwidth. A wide variety of detectors are available for lidar imaging, ranging from single-element detectors to 2D detector arrays, which may build an image with a single pulse [117].
The material of the detector defines its sensitivity to the wavelength of interest. Currently, Si-based detectors are used for the waveband between 0.3 μm and 1.1 μm, while InGaAs detectors are used above 1.1 μm, although they have acceptable sensitivities from 0.7 μm onwards [118]. InP detectors and InGaAs/InP heterostructures have also been proposed as detectors in the mid-infrared [119], although their use in commercial lidar systems is rare due to their large cost outside telecommunications standards and, eventually, the need to cool them to reduce their noise figure.
Light detection in lidar imaging systems usually involves five different types of detectors: PIN diodes, avalanche photodiodes (APDs), single-photon avalanche photodiodes (SPADs), multi-pixel photon counters (MPPCs) and, eventually, photomultiplier tubes (PMTs). Each one may be built from the material which addresses the wavelength of interest. The most used single detectors are PIN photodiodes, which can be very fast at detecting light events if they have enough sensitivity for the application, but provide no gain inside their media, so in optimal efficiency conditions each photon creates a single photoelectron. For applications that need moderate to high sensitivity and can tolerate bandwidths just below the GHz regime typical of PIN diodes, avalanche photodiodes (APDs) are the most useful receivers, since their structure provides a certain level of multiplication of the current generated by the incident light. APDs internally increase the current generated by the photons incident on the sensitive area, providing a level of gain, usually around two orders of magnitude. In fact, their gain is proportional to the reverse bias applied, so they are linear devices with adjustable gain which provide a current output proportional to the optical power received. Single-photon avalanche diodes (SPADs) are essentially APDs biased beyond the breakdown voltage, with their internal structure arranged to repetitively withstand large avalanche events. Whereas in an APD a single photon can produce on the order of tens to a few hundreds of electrons, in a SPAD a single photon produces a large avalanche of thousands of electrons, which results in detectable photocurrents. An interesting arrangement of SPADs has been proposed recently with MPPCs, which are pixelated devices formed by an array of SPADs where all pixel outputs are added together into a single analog output, effectively enabling photon counting through the measurement of the fired intensity [45,120,121,122]. Finally, PMTs are based on the external photoelectric effect and the emission of electrons within a vacuum tube, which brings them to collide with cascaded dynodes, resulting in a true avalanche of electrons. Although they are not solid-state, they still provide the largest gains available for single-photon detection and are sensitive in the UV, which may make them useful in very specific applications. Their use in autonomous vehicles is rare, but they have been the historical detector of reference in atmospheric and remote sensing lidar, so we include them here for completeness.
Several complete overviews of photodetection, photodetector types, and their key performance parameters may be found elsewhere and go beyond the scope of this paper [45,123,124,125,126]. Here we will focus only on the main photodetectors used in lidar imaging. Due to the relevance of the concepts of gain and noise in detection situations where the SNR is usually small, a brief discussion of these concepts is introduced before the detailed description of the different detector families of interest.

3.2.1. Gain and Noise

As described above, gain is a critical capability of a photodetector in low-SNR conditions, as it increases the available signal from an equivalent input. Gain increases the power or amplitude of a signal from the input (in lidar, the initial number of photoelectrons generated by the absorbed incoming photons) to the output (the final number of photoelectrons sent to digitization) by adding energy to the signal. It is usually defined as the mean ratio of the output power to the input signal, so a gain greater than 1 indicates amplification. Noise, on the other hand, includes all the unwanted, irregular fluctuations introduced by the signal itself, the detector and the associated electronics, which accompany the signal and perturb its detection by obscuring it. Photodetection noise may arise from different mechanisms, well described in [127].
Gain is a relevant feature of any photodetector in low-SNR applications. Conventionally, a photon striking the detector surface has some probability of producing a photoelectron, which in turn produces a current within the detector that is usually converted to voltage and digitized after some amplification circuitry. The gain of the photodetector dictates how many electrons are produced for each photon that is successfully converted into a useful signal. The effect of such detectors on the signal-to-noise ratio of the system is to apply a gain factor G to the signal and to certain noise terms, which may also be amplified by the gain.
Noise is, in fact, a deviation of the response from the ideal signal, so it is represented by a standard deviation ($\sigma_{noise}$). Independent noise sources combine in quadrature, which means that a dominant noise term is likely to appear:
$$ \sigma_{noise} = \sqrt{\sigma_{shot}^{2} + \sigma_{th}^{2} + \sigma_{back}^{2} + \sigma_{read}^{2} + \cdots} $$
Thermal noise $\sigma_{th}$ and shot noise $\sigma_{shot}$ are considered fixed system noise sources. The first is due to the thermal motion of the electrons inside the semiconductor, which adds electrons to the output current that are not related to the incident optical power. Shot noise is related to the statistical fluctuations in the optical signal itself and its statistical interaction with the detector; it is relevant only in very low light applications, where the statistics of photon arrival become observable. Other noise sources are considered external, such as background noise $\sigma_{back}$, readout noise $\sigma_{read}$, or speckle noise $\sigma_{speckle}$. They appear, respectively, as a consequence of background illumination at the same wavelength as the laser pulses, of fluctuations in the generation of photoelectrons or their amplification, and of speckle fluctuations in the received laser signal. While fixed system noise is not affected by the gain of the detector, background, readout and speckle noise are also amplified by the gain, which may become counter-productive for detection. When either thermal or shot noise is the dominant noise source, the detector is said to operate in the thermal or shot noise regime, and the existence of gain significantly improves the SNR value.
The SNR for a detector with gain may then be written as:
$$ SNR_{Gain} \approx \frac{G \cdot N_{signal}}{\sqrt{\sigma_{shot}^{2} + \sigma_{th}^{2} + G^{2} \cdot \sigma_{back}^{2} + G^{2} \cdot \sigma_{read}^{2} + \cdots}} \qquad (9) $$
where $N_{signal}$ is the number of photoelectrons generated by real incoming photons. Equation (9) shows the advantage of working in the thermal or shot noise regime when gain is available.
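A numerical reading of Equation (9), sketched below in Python with illustrative noise values of our own choosing, makes the point explicit:

```python
import math

def snr_with_gain(n_signal, gain, sigma_shot, sigma_th, sigma_back, sigma_read):
    """Numerical form of Equation (9): the gain G multiplies the signal and
    the external noise terms, but not the fixed system noise."""
    noise = math.sqrt(sigma_shot**2 + sigma_th**2
                      + (gain * sigma_back)**2 + (gain * sigma_read)**2)
    return gain * n_signal / noise

# When fixed (thermal/shot) noise dominates, gain pays off handsomely:
print(round(snr_with_gain(10, 1,   5, 20, 0.5, 0.5), 2))  # ~0.48
print(round(snr_with_gain(10, 100, 5, 20, 0.5, 0.5), 2))  # ~13.6
```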

3.2.2. Photodetectors in Lidar Imaging Systems

(a) PIN photodiodes
A PIN photodiode is a diode with a wide, undoped intrinsic semiconductor region between a p-doped and an n-doped region. When used as a photodetector, the PIN diode is reverse-biased. Under this condition the diode is not a conductor, but when a photon with enough energy enters the depletion region, it creates an electron-hole pair. The field of the reverse bias then sweeps the carriers out of the region, creating a current proportional to the number of incoming photons. This gives rise to a photocurrent sent to an external amplification circuit. The depletion region stays completely within the intrinsic region, and it is much larger than in a PN diode and almost constant in size, regardless of the reverse bias applied to the diode.
When using PIN diodes as receivers, most photons are absorbed in the I-region, and the carriers generated therein contribute efficiently to the photocurrent. They can be manufactured from multiple materials, including Si, InGaAs or CdTe, yielding wide options regarding spectral response, although the most usual single-element detectors are based on Si and InGaAs. PIN photodiodes do not have any gain (G = 1) (Figure 12), but may present very large bandwidths (up to 100 GHz, depending on size and capacitance), dimensions of millimeters, low cost, low bias and large QE. However, their sensitivity is not enough for low-light applications. In lidar imaging systems, they are typically used as detectors in pulsed lidars to raise the start signal for the time-to-digital converter at the exit of the laser pulse.
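As a trivial sketch of that role (our own illustrative example), the range simply follows from the start and stop timestamps:

```python
C = 299_792_458.0  # speed of light (m/s)

def pulsed_tof_range_m(t_start_s, t_stop_s):
    """Direct-detection pulsed TOF: the PIN diode raises the start signal at
    pulse emission, the receiver provides the stop; range is half the round trip."""
    return 0.5 * C * (t_stop_s - t_start_s)

print(round(pulsed_tof_range_m(0.0, 667e-9), 1))  # ~100.0 m
```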
(b) Avalanche photodiodes
An avalanche photodiode (APD) is a diode-based photodetector with an internal gain mechanism, operated with a relatively high reverse bias voltage (typically tens or even hundreds of volts), sometimes just below the breakdown of the device, which happens when a too-large reverse bias is applied. As with a conventional photodiode, the absorption of incident photons generates a limited number of electron-hole pairs. Under high bias voltage, a strong internal electric field is created which accelerates the generated carriers and creates additional secondary electrons by impact ionization. The resulting electron avalanche process, which takes place over a distance of only a few micrometers, can produce gain factors up to a few hundred while remaining directly proportional to the incoming optical power and to the reverse bias. The amplification factor of the APD dictates the number of photoelectrons created for each successfully detected photon and, thus, the effective responsivity of the receiver. Gain may vary from device to device and strongly depends on the reverse voltage applied. However, if the reverse voltage is increased further, a voltage drop occurs due to the current flowing through the device load resistance, which means that the value of the maximum gain depends on the photocurrent. Although there is a large linear region of operation where the output photocurrent presents gain proportional to the power of the incoming light, when the APD is operated near its maximum gain (and thus close to the breakdown voltage) its response is no longer linear, and the APD is said to operate in Geiger mode. The level of gain required to obtain the optimal SNR value will often depend on the amount of incident light (Figure 12). Several excellent detailed reviews of APDs are available, e.g., [128].
APDs are very sensitive detectors. However, the avalanche process itself creates fluctuations in the generated current, and thus noise, which can offset the advantage of gain in the SNR. The noise associated with the statistical fluctuations in the gain process is called excess noise. Its amount depends on several factors: the magnitude of the reverse voltage, the properties of the material (in particular, the ionization coefficient ratio), and the device design. Generally speaking, when fixed system noise is the limiting noise factor, the performance of APDs is much better than that of devices with ordinary PIN photodiodes. However, increasing the gain also increases the excess noise factor, so there exists an optimal operating gain for each operating condition, usually well below the actual maximum gain, where the maximum SNR performance is obtained.
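The existence of such an optimum can be illustrated with McIntyre's classical excess noise factor, F = kM + (2 − 1/M)(1 − k), where k is the ionization coefficient ratio mentioned above; the toy model below uses illustrative values of our own, not figures from any specific device:

```python
import math

def excess_noise_factor(gain, k):
    """McIntyre's excess noise factor F = k*M + (2 - 1/M)*(1 - k)."""
    return k * gain + (2 - 1 / gain) * (1 - k)

def apd_snr(n_signal, gain, k, sigma_th):
    """Toy shot-plus-thermal SNR model: the gain amplifies signal and shot
    noise (the latter degraded by F); thermal noise stays fixed."""
    shot_var = gain**2 * excess_noise_factor(gain, k) * n_signal
    return gain * n_signal / math.sqrt(sigma_th**2 + shot_var)

# Sweeping the gain reveals an optimum well below the maximum gain:
best = max(range(1, 500), key=lambda g: apd_snr(100, g, 0.02, 1000))
print(best)  # an intermediate gain (~100 in this toy example)
```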
Linear-mode APDs (in contrast to Geiger-mode APDs, or SPADs, described next) present output signals amplified by a gain and proportional to the incoming light. Compared to PIN photodiodes, they have comparable bandwidth but can measure lower light levels, and thus may be used in a variety of applications requiring high sensitivity, such as long-distance communications, optical distance measurement and, obviously, lidar. However, they are not sensitive enough for single-photon detection. They are a mature and widely available technology, so APDs are also available in array form, in multiple sizes, in either 1D or 2D arrays, with photosensitive areas up to 10 × 10 mm, especially in Si. Large InGaAs arrays, on the contrary, are hard to find and prohibitively priced at present.
(c) Single-photon avalanche photodiodes
APDs working in Geiger mode are known as single-photon avalanche diodes (SPADs). SPADs are operated slightly above the breakdown threshold voltage (Figure 12), the electric field being so high that a single electron–hole pair injected into the depletion layer can trigger a strong, self-sustained avalanche. The current rises swiftly to a macroscopic steady level and keeps flowing until the avalanche is quenched, meaning it is stopped and the SPAD is operative again. Under these circumstances, the photocurrent is not linearly amplified; rather, a standard final current value is reached regardless of whether the avalanche was triggered by one or by several incident photons. The device architecture needs to be prepared for repeated avalanches without compromising the response of the detector. The structure of SPADs thus differs from that of linear-mode APDs, in order to withstand repeated avalanches and provide efficient and fast quenching mechanisms.
For effective Geiger-mode operation, the avalanche process must be stopped and the photodetector must be brought back to its original quiescent state. This is the role of the quenching circuit. Once the photocurrent is triggered, the quenching circuit reduces the voltage at the photodiode below the breakdown voltage for a short time, so the avalanche is stopped. After some recovery time, the detector restores its sensitivity and is ready for the reception of further photons. Such a dead time constitutes a substantial limitation of these devices, as it limits the count rate and leaves the device useless for times on the 100 ns scale, severely limiting its bandwidth. This is being tackled through improved quenching circuits. Currently, two types of quenching are in use: passive, in which the avalanche is interrupted by lowering the bias voltage below breakdown using a high-value resistor; and active, based on active current feedback loops. Active quenching was specifically devised to overcome the slow recovery times characteristic of passive quenching. The rise of the avalanche is sensed through a low impedance, and a reaction back on the device is triggered by controlling the bias voltage using active components (pulse generators or fast active switches) that force the quenching and reset transitions in shorter times [129]. Active quenching is currently a very active research line, due to its relevance for low-light detection in several imaging applications [130].
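The effect of the dead time on the count rate can be sketched with a standard non-paralyzable dead-time model (an idealization; real devices differ):

```python
def observed_rate_hz(true_rate_hz, dead_time_s=100e-9):
    """Non-paralyzable dead-time model: photons arriving while the SPAD
    recovers (~100 ns, as quoted above) are simply missed."""
    return true_rate_hz / (1 + true_rate_hz * dead_time_s)

# The count rate saturates near 1/dead_time (10 Mcounts/s here), no matter
# how strong the optical return:
for r in (1e5, 1e7, 1e9):
    print(f"{r:.0e} -> {observed_rate_hz(r):.2e}")
```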
The macroscopic current generated by the avalanche is discernible using electronic threshold detection. Since threshold detection is digital, it is essentially noiseless, although different mechanisms can still falsely fire the avalanche process, generating noise. The main sources of false counts in SPADs are thermally generated carriers and afterpulsing [131]. The first is due to generation-recombination processes within the semiconductor as a result of thermal fluctuations, which may trigger the avalanche and produce a false alarm. In the second, some carriers are captured during the avalanche by deep energy levels in the junction depletion layer and subsequently released with a statistically fluctuating delay. These delayed carriers can retrigger the avalanche, generating after-pulses, an effect that increases with the delay of avalanche quenching and with the current intensity.
SPADs are, however, extremely efficient in low-light detection, and can be used when an extremely high sensitivity at the single-photon level is required. Devices with optimized amplifier electronics are also available in CMOS-integrated form, even as large detector arrays, for applications from quantum optics to low-light biomedical imaging. The intensity of the signal may be obtained through repeated illumination cycles, counting the number of output pulses received within a measurement time slot. Statistical measurements of the time-dependent waveform of the signal may be obtained by measuring the time distribution of the received pulses, using the time-correlated single-photon counting (TCSPC) technique [21]. SPADs are used in a number of lidar applications and products that take advantage of their extreme sensitivity, and may be found individually or in 1D or 2D arrays. Their main drawback is their sensitivity to large back-reflections, which may saturate the detector and leave it inoperative for short periods, an event easily encountered in real life, where large retroreflective surfaces are present almost everywhere on the road as traffic signs, as discussed in Section 2.2.2.a regarding flash imagers.
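A minimal simulation of the TCSPC idea (entirely synthetic values of our own) shows how the histogram of photon arrival times over many cycles reveals the round-trip time:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated photon arrivals over many cycles: a return at 200 ns
# (i.e., 30 m range) buried in uniform background counts.
signal = rng.normal(200e-9, 1e-9, 500)
background = rng.uniform(0, 400e-9, 2000)
times = np.concatenate([signal, background])

# TCSPC: histogram the arrivals; the peak bin estimates the round trip.
hist, edges = np.histogram(times, bins=400, range=(0, 400e-9))
t_peak = edges[np.argmax(hist)]
print(0.5 * 299_792_458.0 * t_peak)  # ~30 m
```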
(d) Multipixel photon counters
SPADs are very efficient in the detection of single photons, as they provide a digital output in the presence of one or more photons. The signal obtained when detecting one photon or several is thus identical, which is a drawback in several applications. Multipixel photon counters (MPPCs), also known as silicon photomultipliers (SiPMs), are essentially SPAD arrays with cells of variable size that recombine the output signals of the individual SPADs into a joint analog signal [132,133]. In this fashion, the analog signal is proportional to the number of SPADs triggered, enabling photon counting beyond the digital on/off photon detection capability of SPADs.
Each microcell in an MPPC consists of a SPAD sensor with its own quenching circuit. When a microcell triggers in response to an absorbed photon, the Geiger avalanche causes a photocurrent to flow through the microcell. The avalanche is confined to the single pixel where it was initiated, while all other microcells remain fully charged and ready to detect photons. When a photon is detected, the receiving microcell outputs a single pulse with a fixed amplitude that does not vary with the number of photons entering it at the same time; the output amplitude is equal for each of the pixels. Although the device works in digital mode pixel by pixel, MPPCs become analog devices because all the microcells are read in parallel, and the pulses generated by multiple cells are superimposed onto each other to obtain the final photocurrent. As a drawback, linearity worsens as more photons hit the device, because the probability of more than one photon hitting the same microcell increases. Further, as an array, the potential for crosstalk and afterpulsing between cells may be significant depending on the application [134]. Optical crosstalk occurs when a primary avalanche in a microcell triggers secondary discharges in one or more adjacent microcells, i.e., the cell that actually detects photons makes other pixels produce pulses, so the output signal is higher than that implied by the amount of incident light. Its probability depends on fixed factors (like the size of the microcell and the layered architecture) and on variable ones (like the difference between the applied bias voltage and the breakdown voltage).
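This loss of linearity is often modeled by assuming photons land on microcells at random, so the expected number of fired cells saturates exponentially; the sketch below uses an illustrative detection efficiency of our own choosing:

```python
import math

def fired_microcells(n_photons, n_cells, pde=0.4):
    """Expected number of fired MPPC microcells: two photons hitting the
    same cell yield a single pulse, so the response saturates.
    (pde = photon detection efficiency; an illustrative value.)"""
    return n_cells * (1 - math.exp(-n_photons * pde / n_cells))

# Nearly linear while photons << microcells, then saturating:
for n in (10, 100, 1000, 10000):
    print(n, round(fired_microcells(n, 1000), 1))
```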
A typical MPPC has microcell densities between 100 and several thousand per mm², depending on the size of the unit. Its characteristics are strongly connected to the operating voltage and ambient temperature. In general, raising the reverse voltage increases the electric field inside the device and thus improves the gain, the photon detection efficiency and the time resolution. On the other hand, it also increases undesired components that lower the SNR, such as false triggers due to thermal noise and afterpulsing. The operating voltage must therefore be set carefully in order to obtain the desired characteristics.
Despite these practicalities, MPPCs have many attractive features, including very high gains (about 10⁶), analog photon-counting capabilities, a wide range of commercially available sizes (they are even modular, so they can be tiled next to each other), and lower operating voltage and power consumption than conventional PMTs. Due to these characteristics, the MPPC is a useful device for most low-light applications, in particular for lidar and for any single-photon application where solid-state detectors are an advantage relative to PMTs.
(e) Photomultiplier tubes
As the final detector in the series, photomultiplier tubes (PMTs) have played, and still play, a relevant role in several applications, including atmospheric lidar for remote sensing. They have been compared with MPPC detectors in atmospheric lidar, showing comparable performance [135].
PMTs are based on the external photoelectric effect: a photon incident onto a photosensitive area within a vacuum tube extracts a photoelectron from the material. The photoelectron is accelerated to impact onto a cascaded series of electrodes named dynodes, where more electrons are generated by ionization at each impact, creating a cascaded secondary emission. Each photoelectron generated is thus multiplied in cascade, again enabling single-photon detection; gains up to 10⁸ at MHz rates are possible [136]. PMTs dominated the single-photon detection scene for many years, especially in scientific and medical applications. They are still the only photodetectors with a decent response and gain in the UV region, present unrivaled gain at all wavelengths, and their rise times are on the ns scale, so their bandwidth is very large (>1 GHz). However, PMTs are bulky, fragile, non-solid-state devices affected by magnetic fields, which strongly limits their applicability in autonomous vehicles. Other disadvantages include the requirement for a high-voltage supply, the high cost and, in some cases, their low quantum efficiency.
Table 4 presents a brief summary of the main features of the photodetectors we have just described.

4. Pending Issues

While the deployment of lidar imaging systems for autonomous vehicles seems unstoppable, and major automotive manufacturers are starting to select providers for data collection units and to introduce them in commercial vehicles, the final technology implementation is still uncertain in several relevant details. The selection of the most appropriate technology and scanning strategy among the different competing alternatives, in terms of cost and functionality, still needs in-depth work, and is one of the most visible examples of the current uncertainty in the industry. Beyond the final technology of choice, however, several relevant issues still need to be worked out for the full implementation of lidar imaging systems in commercial vehicles. Here we present a list of pending issues, in particular for automotive. A complete list of all potential pending issues is impossible to compile, but we propose what seem to us some very relevant problems for lidar deployment, most of them common to all types of measurement principles and implementations.

4.1. Spatial Resolution

Dense point clouds with a large spatial resolution in both the horizontal and vertical directions are one of the cornerstones of object detection. While detectivity at long range has been achieved even for objects with low reflectivities, the reliable identification of objects at significant distances needs enhancement, as shown by the trend towards larger and larger spatial resolutions [137]. A rule-of-thumb figure says objects above 10 cm can hit the bumper of some cars, and braking or evasion at highway speeds needs hazards to be detected at no less than 150 m. This yields a required angular resolution of 0.67 mrad for detecting a single point on the object. If five to ten points are considered necessary for the reliable identification of an object, the angular resolution in all directions needs to be reduced to just 67 μrad, a really demanding number. Although cumulative detection procedures, statistics and machine learning may help to improve detection [138], it is clear that larger spatial resolutions, especially along the vertical axis, will be required if autonomous vehicles are to drive at high speeds on highways. Approaches with a spatial resolution close to 1 mrad in the horizontal and vertical directions while preserving real-time operation start to be available (Figure 13) [68].
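The arithmetic behind these figures is straightforward, as the following check (using only the numbers quoted above) shows:

```python
def required_angular_resolution_rad(object_size_m, range_m, points_across=1):
    """Angular sampling needed to place a given number of points across an
    object of a given size at a given range (small-angle approximation)."""
    return object_size_m / (range_m * points_across)

print(required_angular_resolution_rad(0.10, 150.0))      # 6.7e-4 rad = 0.67 mrad
print(required_angular_resolution_rad(0.10, 150.0, 10))  # 6.7e-5 rad = 67 urad
```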

4.2. Sensor Fusion and Data Management

Despite lidar being the sensor in the headlines, a complex sensor suite involving a number of sensors is expected to be required. Such sensors need to have complementary measurement principles and failure modes to yield a safe and reliable solution for fully autonomous cars [139]. In general, short- and long-range radar and lidar, combined with ultrasound and vision cameras, are accepted as parts of the potential final solution. Such amounts of information, even without including the high-density point clouds mentioned above, need to be fused and processed in real time to detect hazards and react to them in time [140]. This information has been estimated at somewhere between 11 TB and 140 TB per day, with bandwidths in the range of 19 TB/h to 40 TB/h [141], which becomes a relevant storage, processing and management problem in itself. Sensor fusion procedures may depend on the operating conditions of the moment, but even then the processes are not obvious, involving different approaches for different lidar principles: fusing information from a camera and a mechanical scanning lidar covering 360 deg is harder than within the limited FOV of a voice-coil or MEMS scanner. Such procedures are also prone to parallax errors due to the different positions and geometries of the sensors, while in any case posing demanding requirements on the computing power of the vehicle, even for embedded solutions.

4.3. Sensor Distribution

While the components of the sensor suite are still being defined and are at an early stage of the production cycle, their distribution around the vehicle is not a simple decision either. The number of sensors reasonably mounted on a self-driving car is estimated at somewhere between 20 and 40 [141]. Further, data collection and machine learning procedures may be affected by relevant changes in the pose of the sensor, its position and its field of view, in the worst cases forcing the tedious data collection process to be restarted. It is generally accepted that the current approach, with lidars and sensors fixed on the roof of the vehicle, is suitable for robotaxis in controlled environments or for data collection vehicles, but not for commercial units. Where to place the lidar, if not on the roof, has relevant implications for the covered FOV and the number of units required, with the associated cost consequences. The decision also has relevant implications for the vehicle itself, where currently not much free space is available for sensors with the dimensions of current units, proposed in the 10–20 cm range in all dimensions. Further, the distribution and position of the sensors are key for aspects such as reliability, dirt management, servicing and survival of the unit after minor crashes. Although the most usual approach is to embed the lidar in the central front panel of the vehicle, there are alternatives, such as the headlamps, as potential lodgings for lidar sensors [44].

4.4. Bad Weather Conditions

Autonomous driving tests have, up to now, been conducted mostly in sunny environments, such as California or Texas. However, the quality of detection under fog, rain and snow, especially when extreme, becomes severely degraded, particularly regarding range [142], due to the absorption and scattering events induced by water droplets. This introduces a large number of false detection alarms from the backscattered intensity, reducing the reliability of the sensor [85,143]. Further, snowflakes, fog and rain droplets have different shapes, distributions and sizes and affect different aspects of the detection, complicating precise modeling [144]. Managing lidar imaging in extreme weather conditions is a pending subject of the technology which needs to be tackled in the near future for the commercial deployment of automated vehicles.

4.5. Mutual Interference

To close this section, let us consider a world full of self-driving cars, each with its lidar imaging system emitting pulses or waves at high frequency. Imagine them in a traffic jam, or in an area with a large vehicle density. The uniqueness of the signal emitted by each lidar needs to be ensured, so the source of one vehicle does not trigger detections in the vehicles around it. Although some measurement principles may have advantages over others, the implementation of discrimination patterns for each individual vehicle may be challenging [145]. For instance, FMCW detection appears better placed than direct pulse detection, as the modulation frequency and amplitude, combined with the coherent detection implemented, may help to add a personalized signature to each lidar. However, the implementation of this concept at mass scale needs to be carefully considered, and possibly supported by improved data processing to filter out false detections while preserving the reliability of the unit.

5. Conclusions

Lidar imaging systems are a novel type of sensor enabling complete 3D perception of the environment, rather than just the conventional 2D projection obtained from a camera. Within this paper, we have tried to describe in detail the different configurations of lidar imaging systems available for autonomous vehicles. Although the discussion is biased towards cars because of the rising activity in that field, similar considerations apply to maritime or aerial vehicles. We reviewed and compared the three main working principles underlying all lidar measurements, and then overviewed the main strategies involved in imaging, grouped into scanners (mechanical, MEMS and OPA) and detector arrays (flash and TOF approaches). Afterwards, we overviewed the principal considerations related to the sources and photodetectors currently used in lidar imaging systems, showing their advantages and disadvantages. We finished with some of the most relevant pending issues which lidar imaging systems in vehicles need to overcome to become the reality everyone is expecting.
The goal of the paper was to introduce the topic and to somehow order the disparate information delivered by lidar manufacturers, scientific papers, proceedings and market studies in the field, sometimes biased due to commercial needs and venture capital pressure. These are very exciting times for lidar imaging, a field where mechanics, optronics, software and robotics are merging to develop, potentially, the social revolution which autonomous vehicles may bring. Possibly the least uncertain thing is that we can expect several surprises along the way.

Author Contributions

Both authors contributed significantly to all parts of the manuscript. S.R. was more active in Section 1, Section 2 and Section 4, while M.B.-G. was in charge of Section 3 and the compilation of the bibliography.

Funding

This research was funded by MINECO project FIS2017-89850R, from EU project H2020-826600-VIZTA, and from AGAUR Grant 2019FI-B-00868.

Acknowledgments

The authors wish to acknowledge the support from Jan-Erik Källhammer and Jordi Riu in the revision and elaboration of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Woodside Capital Partners & Yole Développement. Automotive LiDAR Market Report; OIDA Publications & Reports; Optical Society of America: Washington, DC, USA, 2018. [Google Scholar]
  2. Schoonover, D. The Driverless Car Is Closer Than You Think—And I Can’t Wait; Forbes: Randall Lane, NJ, USA, 2019. [Google Scholar]
  3. Comerón, A.; Muñoz-Porcar, C.; Rocadenbosch, F.; Rodríguez-Gómez, A.; Sicard, M. Current research in LIDAR technology used for the remote sensing of atmospheric aerosols. Sensors 2017, 17, 1450. [Google Scholar] [CrossRef] [PubMed]
  4. McManamon, P.F. Field Guide to Lidar; SPIE: Bellingham, WA, USA, 2015. [Google Scholar]
  5. Weitkamp, C. LiDAR: Introduction. In Laser Remote Sensing; CRC Press: Boca Raton, FL, USA, 2005; pp. 19–54. [Google Scholar]
  6. Dong, P.; Chen, Q. LiDAR Remote Sensing and Applications; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  7. Remote Sensing—Open Access Journal. Available online: https://www.mdpi.com/journal/remotesensing (accessed on 27 September 2019).
  8. Remote Sensing—Events. Available online: https://www.mdpi.com/journal/remotesensing/events (accessed on 27 September 2019).
  9. Douillard, B.; Underwood, J.; Kuntz, N.; Vlaskine, V.; Quadros, A.; Morton, P.; Frenkel, A. On the segmentation of 3D LIDAR point clouds. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 2798–2805. [Google Scholar]
  10. Premebida, C.; Carreira, J.; Batista, J.; Nunes, U. Pedestrian detection combining rgb and dense lidar data. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 4112–4117. [Google Scholar]
  11. Neumann, U.; You, S.; Hu, J.; Jiang, B.; Lee, J. Augmented virtual environments (ave): Dynamic fusion of imagery and 3d models. In Proceedings of the IEEE Virtual Reality, Los Angeles, CA, USA, 22–26 March 2003; pp. 61–67. [Google Scholar]
  12. Himmelsbach, M.; Mueller, A.; Lüttel, T.; Wünsche, H.J. LIDAR-Based 3D Object Perception; IRIT: Toulouse, France, 2008; p. 1. [Google Scholar]
  13. Kolb, A.; Barth, E.; Koch, R.; Larsen, R. Time-of-flight cameras in computer graphics. Comput. Gr. Forum 2010, 29, 141–159. [Google Scholar] [CrossRef]
  14. Han, J.; Shao, L.; Xu, D.; Shotton, J. Enhanced computer vision with microsoft kinect sensor: A review. IEEE Trans. Cybern. 2013, 43, 1318–1334. [Google Scholar]
  15. Darlington, K. The Social Implications of Driverless Cars; BBVA OpenMind: Madrid, Spain, 2018. [Google Scholar]
  16. Moosmann, F.; Stiller, C. Velodyne slam. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 393–398. [Google Scholar]
  17. Behroozpour, B.; Sandborn, P.A.; Wu, M.C.; Boser, B.E. Lidar system architectures and circuits. IEEE Commun. Mag. 2017, 55, 135–142. [Google Scholar] [CrossRef]
  18. Illade-Quinteiro, J.; Brea, V.; López, P.; Cabello, D.; Doménech-Asensi, G. Distance measurement error in time-of-flight sensors due to shot noise. Sensors 2015, 15, 4624–4642. [Google Scholar] [CrossRef]
  19. Sarbolandi, H.; Plack, M.; Kolb, A. Pulse Based Time-of-Flight Range Sensing. Sensors 2018, 18, 1679. [Google Scholar] [CrossRef] [PubMed]
  20. Theiß, S. Analysis of a Pulse-Based ToF Camera for Automotive Application. Master’s Thesis, University of Siegen, Siegen, Germany, 2015. [Google Scholar]
  21. O’Connor, D. Time-Correlated Single Photon Counting; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
  22. Koskinen, M.; Kostamovaara, J.T.; Myllylae, R.A. Comparison of continuous-wave and pulsed time-of-flight laser range-finding techniques. In Proceedings of the Optics, Illumination, and Image Sensing for Machine Vision VI, Anaheim, CA, USA, 1 March 1992; Volume 1614, pp. 296–305. [Google Scholar]
  23. Wehr, A.; Lohr, U. Airborne laser scanning—An introduction and overview. ISPRS J. Photogramm. Remote Sens. 1999, 54, 68–82. [Google Scholar] [CrossRef]
  24. Horaud, R.; Hansard, M.; Evangelidis, G.; Ménier, C. An overview of depth cameras and range scanners based on time-of-flight technologies. Mach. Vis. Appl. 2016, 27, 1005–1020. [Google Scholar] [CrossRef] [Green Version]
  25. Richmond, R.; Cain, S. Direct-Detection LADAR Systems; SPIE Press: Bellingham, WA, USA, 2010. [Google Scholar]
  26. Amann, M.C.; Bosch, T.M.; Lescure, M.; Myllylae, R.A.; Rioux, M. Laser ranging: A critical review of unusual techniques for distance measurement. Opt. Eng. 2001, 40, 10–20. [Google Scholar]
  27. Hansard, M.; Lee, S.; Choi, O.; Horaud, R.P. Time-of-Flight Cameras: Principles, Methods and Applications; Springer Science & Business Media: Berlin, Germany, 2012. [Google Scholar]
  28. Gokturk, S.B.; Yalcin, H.; Bamji, C. A time-of-flight depth sensor-system description, issues and solutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 27 June–2 July 2004; p. 35. [Google Scholar]
  29. Möller, T.; Kraft, H.; Frey, J.; Albrecht, M.; Lange, R. Robust 3D measurement with PMD sensors. Range Imaging Day Zürich 2005, 7, 906467(1-14)7. [Google Scholar]
  30. Lefloch, D.; Nair, R.; Lenzen, F.; Schäfer, H.; Streeter, L.; Cree, M.J.; Koch, R.; Kolb, A. Technical foundation and calibration methods for time-of-flight cameras. In Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications; Springer: Berlin, Germany, 2013; pp. 3–24. [Google Scholar]
  31. Lange, R.; Seitz, P. Solid-state time-of-flight range camera. IEEE J. Quantum Electron. 2001, 37, 390–397. [Google Scholar] [CrossRef]
  32. Foix, S.; Alenya, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11, 1917–1926. [Google Scholar] [CrossRef]
  33. Oggier, T.; Büttgen, B.; Lustenberger, F.; Becker, G.; Rüegg, B.; Hodac, A. SwissRanger SR3000 and first experiences based on miniaturized 3D-TOF cameras. In Proceedings of the First Range Imaging Research Day at ETH Zurich, Zurich, 2005. [Google Scholar]
  34. Petermann, K. Advances in Optoelectronics; Springer: Berlin, Germany, 1988. [Google Scholar]
  35. Jha, A.; Azcona, F.J.; Royo, S. Frequency-modulated optical feedback interferometry for nanometric scale vibrometry. IEEE Photon. Technol. Lett. 2016, 28, 1217–1220. [Google Scholar] [CrossRef]
  36. Agishev, R.; Gross, B.; Moshary, F.; Gilerson, A.; Ahmed, S. Range-resolved pulsed and CWFM lidars: Potential capabilities comparison. Appl. Phys. B 2006, 85, 149–162. [Google Scholar] [CrossRef]
  37. Uttam, D.; Culshaw, B. Precision time domain reflectometry in optical fiber systems using a frequency modulated continuous wave ranging technique. J. Lightw. Technol. 1985, 3, 971–977. [Google Scholar] [CrossRef]
  38. Aulia, S.; Suksmono, A.B.; Munir, A. Stationary and moving targets detection on FMCW radar using GNU radio-based software defined radio. In Proceedings of the IEEE International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Nusa Dua, Indonesia, 9–12 November 2015; pp. 468–473. [Google Scholar]
  39. Wojtkiewicz, A.; Misiurewicz, J.; Nalecz, M.; Jedrzejewski, K.; Kulpa, K. Two-dimensional signal processing in FMCW radars. In Proceeding of the XXth National Conference on Circuit Theory and Electronic Networks; University of Mining and Metallurgy: Kolobrzeg, Poland, 1997; pp. 475–480. [Google Scholar]
  40. Feneyrou, P.; Leviandier, L.; Minet, J.; Pillet, G.; Martin, A.; Dolfi, D.; Schlotterbeck, J.P.; Rondeau, P.; Lacondemine, X.; Rieu, A.; et al. Frequency-modulated multifunction lidar for anemometry, range finding, and velocimetry—1. Theory and signal processing. Appl. Opt. 2017, 56, 9663–9675. [Google Scholar] [CrossRef] [PubMed]
  41. Rasshofer, R.; Gresser, K. Automotive radar and lidar systems for next generation driver assistance functions. Adv. Radio Sci. 2005, 3, 205–209. [Google Scholar] [CrossRef]
  42. Williams, G.M. Optimization of eyesafe avalanche photodiode lidar for automobile safety and autonomous navigation systems. Opt. Eng. 2017, 56, 031224. [Google Scholar] [CrossRef]
  43. Duong, H.V.; Lefsky, M.A.; Ramond, T.; Weimer, C. The electronically steerable flash lidar: A full waveform scanning system for topographic and ecosystem structure applications. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4809–4820. [Google Scholar] [CrossRef]
  44. Thakur, R. Scanning LIDAR in Advanced Driver Assistance Systems and Beyond: Building a road map for next-generation LIDAR technology. IEEE Consum. Electron. Mag. 2016, 5, 48–54. [Google Scholar] [CrossRef]
  45. National Research Council. Laser Radar: Progress and Opportunities in Active Electro-Optical Sensing; National Academies Press: Washington, DC, USA, 2014. [Google Scholar]
  46. Montagu, J. Galvanometric and Resonant Scanners. In Handbook of Optical and Laser Scanning, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2016; pp. 418–473. [Google Scholar]
  47. Zhou, Y.; Lu, Y.; Hei, M.; Liu, G.; Fan, D. Motion control of the wedge prisms in Risley-prism-based beam steering system for precise target tracking. Appl. Opt. 2013, 52, 2849–2857. [Google Scholar] [CrossRef] [PubMed]
  48. Davis, S.R.; Farca, G.; Rommel, S.D.; Johnson, S.; Anderson, M.H. Liquid crystal waveguides: New devices enabled by > 1000 waves of optical phase control. In Proceedings of the Emerging Liquid Crystal Technologies V, San Francisco, CA, USA, 23–28 January 2010; SPIE: Bellingham, WA, USA, 2010; Volume 7618, p. 76180E. [Google Scholar]
  49. Han, W.; Haus, J.W.; McManamon, P.; Heikenfeld, J.; Smith, N.; Yang, J. Transmissive beam steering through electrowetting microprism arrays. Opt. Commun. 2010, 283, 1174–1181. [Google Scholar] [CrossRef] [Green Version]
  50. Akatay, A.; Ataman, C.; Urey, H. High-resolution beam steering using microlens arrays. Opt. Lett. 2006, 31, 2861–2863. [Google Scholar] [CrossRef] [PubMed]
  51. Ayers, G.J.; Ciampa, M.A.; Vranos, N.A. Holographic Optical Beam Steering Demonstration. In Proceedings of the IEEE Photonic Society 24th Annual Meeting, Arlington, VA, USA, 9–13 October 2011; pp. 361–362. [Google Scholar]
  52. Yoo, H.W.; Druml, N.; Brunner, D.; Schwarzl, C.; Thurner, T.; Hennecke, M.; Schitter, G. MEMS-based lidar for autonomous driving. Elektrotechnik Informationstechnik 2018, 135, 408–415. [Google Scholar] [CrossRef] [Green Version]
  53. Ullrich, A.; Pfennigbauer, M.; Rieger, P. How to read your LIDAR spec—A comparison of single-laser-output and multi-laser-output LIDAR instruments; Riegl Laser Measurement Systems GmbH: Salzburg, Austria, 2013. [Google Scholar]
  54. Baluja, S. Evolution of an Artificial Neural Network Based Autonomous Land Vehicle Controller. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1996, 26, 450–463. [Google Scholar] [CrossRef] [PubMed]
  55. Jo, K.; Kim, J.; Kim, D.; Jang, C.; Sunwoo, M. Development of autonomous car—Part I: Distributed system architecture and development process. IEEE Trans. Ind. Electron. 2014, 61, 7131–7140. [Google Scholar] [CrossRef]
  56. Ackerman, E. Hail, robo-taxi! IEEE Spectr. 2017, 54, 26–29. [Google Scholar] [CrossRef]
  57. Ataman, Ç.; Lani, S.; Noell, W.; De Rooij, N. A dual-axis pointing mirror with moving-magnet actuation. J. Micromech. Microeng. 2012, 23, 025002. [Google Scholar] [CrossRef]
  58. Ye, L.; Zhang, G.; You, Z. 5 V compatible two-axis PZT driven MEMS scanning mirror with mechanical leverage structure for miniature LiDAR application. Sensors 2017, 17, 521. [Google Scholar] [CrossRef]
  59. Schenk, H.; Durr, P.; Haase, T.; Kunze, D.; Sobe, U.; Lakner, H.; Kuck, H. Large deflection micromechanical scanning mirrors for linear scans and pattern generation. IEEE J. Sel. Top. Quantum Electron. 2000, 6, 715–722. [Google Scholar] [CrossRef]
  60. Stann, B.L.; Dammann, J.F.; Giza, M.M. Progress on MEMS-scanned ladar. In Proceedings of the Laser Radar Technology and Applications XXI; SPIE: Bellingham, WA, USA, 2016; Volume 9832, p. 98320L. [Google Scholar]
  61. Kim, G.; Eom, J.; Park, Y. Design and implementation of 3D LIDAR based on pixel-by-pixel scanning and DS-OCDMA. In Proceedings of the Smart Photonic and Optoelectronic Integrated Circuits XIX; SPIE: Bellingham, WA, USA, 2017; Volume 10107, p. 1010710. [Google Scholar]
  62. Holmström, S.T.; Baran, U.; Urey, H. MEMS laser scanners: A review. IEEE J. Microelectromech. Syst. 2014, 23, 259–275. [Google Scholar] [CrossRef]
  63. Urey, H.; Wine, D.W.; Osborn, T.D. Optical performance requirements for MEMS-scanner-based microdisplays. In Proceedings of the MOEMS and Miniaturized Systems, Ottawa, ON, Canada, 22 August 2000; SPIE: Bellingham, WA, USA, 2000; Volume 4178, pp. 176–186. [Google Scholar]
  64. Yalcinkaya, A.D.; Urey, H.; Brown, D.; Montague, T.; Sprague, R. Two-axis electromagnetic microscanner for high resolution displays. IEEE J. Microelectromech. Syst. 2006, 15, 786–794. [Google Scholar] [CrossRef]
  65. Mizuno, T.; Mita, M.; Kajikawa, Y.; Takeyama, N.; Ikeda, H.; Kawahara, K. Study of two-dimensional scanning LIDAR for planetary explorer. In Proceedings of the Sensors, Systems, and Next-Generation Satellites XII; SPIE: Bellingham, WA, USA, 2008; Volume 7106, p. 71061A. [Google Scholar]
  66. Park, I.; Jeon, J.; Nam, J.; Nam, S.; Lee, J.; Park, J.; Yang, J.; Ebisuzaki, T.; Kawasaki, Y.; Takizawa, Y.; et al. A new LIDAR method using MEMS micromirror array for the JEM-EUSO mission. In Proceedings of the 31st International Cosmic Ray Conference (ICRC), Łódź, Poland, 2009; Commission C4; IUPAP: Singapore, 2009. [Google Scholar]
  67. Moss, R.; Yuan, P.; Bai, X.; Quesada, E.; Sudharsanan, R.; Stann, B.L.; Dammann, J.F.; Giza, M.M.; Lawler, W.B. Low-cost compact MEMS scanning ladar system for robotic applications. In Proceedings of the Laser Radar Technology and Applications XVI, Baltimore, MD, USA, 16 May 2012; SPIE: Bellingham, WA, USA, 2012; Volume 8379, p. 837903. [Google Scholar]
  68. Riu, J.; Royo, S. A compact long-range lidar imager for high spatial operation in daytime. In Proceedings of the 8th International Symposium on Optronics in Defence and Security; 3AF—The French Aerospace Society: Paris, France, 2018; pp. 1–4. [Google Scholar]
  69. Heck, M.J. Highly integrated optical phased arrays: Photonic integrated circuits for optical beam shaping and beam steering. Nanophotonics 2016, 6, 93–107. [Google Scholar] [CrossRef]
  70. Hansen, R.C. Phased Array Antennas; John Wiley & Sons: Hoboken, NJ, USA, 2009; Volume 213. [Google Scholar]
  71. Hutchison, D.N.; Sun, J.; Doylend, J.K.; Kumar, R.; Heck, J.; Kim, W.; Phare, C.T.; Feshali, A.; Rong, H. High-resolution aliasing-free optical beam steering. Optica 2016, 3, 887–890. [Google Scholar] [CrossRef]
  72. Sun, J.; Timurdogan, E.; Yaacobi, A.; Hosseini, E.S.; Watts, M.R. Large-scale nanophotonic phased array. Nature 2013, 493, 195. [Google Scholar] [CrossRef]
  73. Van Acoleyen, K.; Bogaerts, W.; Jágerská, J.; Le Thomas, N.; Houdré, R.; Baets, R. Off-chip beam steering with a one-dimensional optical phased array on silicon-on-insulator. Opt. Lett. 2009, 34, 1477–1479. [Google Scholar] [CrossRef] [PubMed]
  74. Eldada, L. Solid-state LIDAR for ubiquitous 3D sensing (Quanergy Systems). In Proceedings of the GPU Technology Conference, 2018. [Google Scholar]
  75. Fersch, T.; Weigel, R.; Koelpin, A. Challenges in miniaturized automotive long-range lidar system design. In Proceedings of the Three-Dimensional Imaging, Visualization, and Display, Orlando, FL, USA, 10 May 2017; SPIE: Bellingham, WA, USA, 2017; Volume 10219, p. 102190T. [Google Scholar]
  76. Rabinovich, W.S.; Goetz, P.G.; Pruessner, M.W.; Mahon, R.; Ferraro, M.S.; Park, D.; Fleet, E.F.; DePrenger, M.J. Two-dimensional beam steering using a thermo-optic silicon photonic optical phased array. Opt. Eng. 2016, 55, 111603. [Google Scholar] [CrossRef]
  77. Laux, T.E.; Chen, C.I. 3D flash LIDAR vision systems for imaging in degraded visual environments. In Proceedings of the Degraded Visual Environments: Enhanced, Synthetic, and External Vision Solutions, Baltimore, MD, USA, 25 June 2014; SPIE: Bellingham, WA, USA, 2014; Volume 9087, p. 908704. [Google Scholar]
  78. Rohrschneider, R.; Masciarelli, J.; Miller, K.L.; Weimer, C. An overview of ball flash LiDAR and related technology development. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, American Institute of Aeronautics And Astronautics, Boston, MA, USA, 19–22 August 2013; p. 4642. [Google Scholar]
  79. Carrara, L.; Fiergolski, A. An Optical Interference Suppression Scheme for TCSPC Flash LiDAR Imagers. Appl. Sci. 2019, 9, 2206. [Google Scholar] [CrossRef]
  80. Gelbart, A.; Redman, B.C.; Light, R.S.; Schwartzlow, C.A.; Griffis, A.J. Flash lidar based on multiple-slit streak tube imaging lidar. In Proceedings of the Laser Radar Technology and Applications VII; SPIE: Bellingham, WA, USA, 2002; Volume 4723, pp. 9–19. [Google Scholar]
  81. McManamon, P.F.; Banks, P.; Beck, J.; Huntington, A.S.; Watson, E.A. A comparison of flash lidar detector options. In Laser Radar Technology and Applications XXI; SPIE: Bellingham, WA, USA, 2016; Volume 9832, p. 983202. [Google Scholar]
  82. Continental Automotive Corporation. Continental Showcases Innovations in Automated Driving, Electrification and Connectivity. Press Release, Automotive Engineering Exposition 2018, Yokohama, Japan, 2018. Available online: https://www.continental.com/resource/blob/129910/d41e02236f04251275f55a71a9514f6d/press-release-data.pdf (accessed on 27 September 2019).
  83. Christian, J.A.; Cryan, S. A survey of LIDAR technology and its use in spacecraft relative navigation. In Proceedings of the AIAA Guidance, Navigation, and Control Conference; American Institute of Aeronautics And Astronautics: Reston, VA, USA, 2013; p. 4641. [Google Scholar]
  84. Lee, T. How 10 leading companies are trying to make powerful, low-cost lidar. ArsTechnica 2019, 1, 1–3. [Google Scholar]
  85. Jokela, M.; Kutila, M.; Pyykönen, P. Testing and Validation of Automotive Point-Cloud Sensors in Adverse Weather Conditions. Appl. Sci. 2019, 9, 2341. [Google Scholar] [CrossRef]
  86. Crouch, S. Advantages of 3D Imaging Coherent Lidar for Autonomous Driving Applications. In Proceedings of the 19th Coherent Laser Radar Conference, Okinawa, Japan, 18–21 June 2018. [Google Scholar]
  87. Elder, T.; Strong, J. The infrared transmission of atmospheric windows. J. Franklin Inst. 1953, 255, 189–208. [Google Scholar] [CrossRef]
  88. IEC 60825-1; Safety of Laser Products - Part 1: Equipment Classification and Requirements; International Electrotechnical Commission: Geneva, Switzerland, 2007.
  89. Coldren, L.A.; Corzine, S.W.; Mashanovitch, M.L. Diode Lasers and Photonic Integrated Circuits; John Wiley & Sons: Hoboken, NJ, USA, 2012; Volume 218. [Google Scholar]
  90. Udd, E.; Spillman, W.B., Jr. Fiber Optic Sensors: An Introduction for Engineers and Scientists; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  91. Barnes, W.; Poole, S.B.; Townsend, J.; Reekie, L.; Taylor, D.; Payne, D.N. Er³⁺-Yb³⁺ and Er³⁺-doped fibre lasers. J. Lightw. Technol. 1989, 7, 1461–1465. [Google Scholar] [CrossRef]
  92. Kelson, I.; Hardy, A.A. Strongly pumped fiber lasers. IEEE J. Quantum Electron. 1998, 34, 1570–1577. [Google Scholar] [CrossRef]
  93. Koo, K.; Kersey, A. Bragg grating-based laser sensors systems with interferometric interrogation and wavelength division multiplexing. IEEE J. Lightw. Technol. 1995, 13, 1243–1249. [Google Scholar] [CrossRef]
  94. Fomin, V.; Gapontsev, V.; Shcherbakov, E.; Abramov, A.; Ferin, A.; Mochalov, D. 100 kW CW fiber laser for industrial applications. In Proceedings of the IEEE 2014 International Conference Laser Optics, St. Petersburg, Russia, 30 June–4 July 2014; p. 1. [Google Scholar]
  95. Wang, Y.; Xu, C.Q.; Po, H. Thermal effects in kilowatt fiber lasers. IEEE Photon. Technol. Lett. 2004, 16, 63–65. [Google Scholar] [CrossRef]
  96. Lee, B. Review of the present status of optical fiber sensors. Opt. Fiber Technol. 2003, 9, 57–79. [Google Scholar] [CrossRef]
  97. Paschotta, R. Field Guide to Optical Fiber Technology; SPIE: Bellingham, WA, USA, 2010. [Google Scholar]
  98. Sennaroglu, A. Solid-State Lasers and Applications; CRC Press: Boca Raton, FL, USA, 2006. [Google Scholar]
  99. Huber, G.; Kränkel, C.; Petermann, K. Solid-state lasers: Status and future. JOSA B 2010, 27, B93–B105. [Google Scholar] [CrossRef]
  100. Zayhowski, J.J. Q-switched operation of microchip lasers. Opt. Lett. 1991, 16, 575–577. [Google Scholar] [CrossRef]
  101. Taira, T.; Mukai, A.; Nozawa, Y.; Kobayashi, T. Single-mode oscillation of laser-diode-pumped Nd:YVO₄ microchip lasers. Opt. Lett. 1991, 16, 1955–1957. [Google Scholar] [CrossRef]
  102. Zayhowski, J.; Dill, C. Diode-pumped microchip lasers electro-optically Q switched at high pulse repetition rates. Opt. Lett. 1992, 17, 1201–1203. [Google Scholar] [CrossRef]
  103. Zayhowski, J.J.; Dill, C. Diode-pumped passively Q-switched picosecond microchip lasers. Opt. Lett. 1994, 19, 1427–1429. [Google Scholar] [CrossRef] [PubMed]
  104. Młyńczak, J.; Kopczyński, K.; Mierczyk, Z.; Zygmunt, M.; Natkański, S.; Muzal, M.; Wojtanowski, J.; Kirwil, P.; Jakubaszek, M.; Knysak, P.; et al. Practical application of pulsed “eye-safe” microchip laser to laser rangefinders. Opt. Electron. Rev. 2013, 21, 332–337. [Google Scholar] [CrossRef]
  105. Zayhowski, J.J. Passively Q-switched microchip lasers and applications. Rev. Laser Eng. 1998, 26, 841–846. [Google Scholar] [CrossRef]
  106. Faist, J.; Capasso, F.; Sivco, D.L.; Sirtori, C.; Hutchinson, A.L.; Cho, A.Y. Quantum cascade laser. Science 1994, 264, 553–556. [Google Scholar] [CrossRef] [PubMed]
  107. Chow, W.W.; Koch, S.W. Semiconductor-Laser Fundamentals: Physics of the Gain Materials; Springer Science & Business Media: Berlin, Germany, 1999. [Google Scholar]
  108. Sun, H. A Practical Guide to Handling Laser Diode Beams; Springer: Berlin, Germany, 2015. [Google Scholar]
  109. Taimre, T.; Nikolić, M.; Bertling, K.; Lim, Y.L.; Bosch, T.; Rakić, A.D. Laser feedback interferometry: A tutorial on the self-mixing effect for coherent sensing. Adv. Opt. Photon. 2015, 7, 570–631. [Google Scholar] [CrossRef]
  110. Michalzik, R. (Ed.) VCSELs: Fundamentals, Technology and Applications of Vertical-Cavity Surface-Emitting Lasers; Springer: Heidelberg, Germany, 2013. [Google Scholar]
  111. Iga, K.; Koyama, F.; Kinoshita, S. Surface emitting semiconductor lasers. IEEE J. Quantum Electron. 1988, 24, 1845–1855. [Google Scholar] [CrossRef]
  112. Kogelnik, H.; Shank, C. Coupled-wave theory of distributed feedback lasers. J. Appl. Phys. 1972, 43, 2327–2335. [Google Scholar] [CrossRef]
  113. Bachmann, F.; Loosen, P.; Poprawe, R. High Power Diode Lasers: Technology and Applications; Springer: Berlin, Germany, 2007; Volume 128. [Google Scholar]
  114. Lang, R.; Kobayashi, K. External optical feedback effects on semiconductor injection laser properties. IEEE J. Quantum Electron. 1980, 16, 347–355. [Google Scholar] [CrossRef]
  115. Kono, S.; Koda, R.; Kawanishi, H.; Narui, H. 9-kW peak power and 150-fs duration blue-violet optical pulses generated by GaInN master oscillator power amplifier. Opt. Express 2017, 25, 14926–14934. [Google Scholar] [CrossRef]
  116. Injeyan, H.; Goodno, G.D. High Power Laser Handbook; McGraw-Hill Professional: New York, NY, USA, 2011. [Google Scholar]
  117. McManamon, P.F. Review of ladar: A historic, yet emerging, sensor technology with rich phenomenology. Opt. Eng. 2012, 51, 060901. [Google Scholar] [CrossRef]
  118. Rogalski, A. Infrared detectors: An overview. Infrared Phys. Technol. 2002, 43, 187–210. [Google Scholar] [CrossRef]
  119. Yu, C.; Shangguan, M.; Xia, H.; Zhang, J.; Dou, X.; Pan, J.W. Fully integrated free-running InGaAs/InP single-photon detector for accurate lidar applications. Opt. Express 2017, 25, 14611–14620. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  120. Capasso, F. Physics of avalanche photodiodes. Semicond. Semimetals 1985, 22, 1–172. [Google Scholar]
  121. Renker, D. Geiger-mode avalanche photodiodes, history, properties and problems. Nucl. Instrum. Methods Phys. Res. Sec. A Accel. Spectrom. Detect. Assoc. Equip. 2006, 567, 48–56. [Google Scholar] [CrossRef]
  122. Piatek, S.S. Physics and Operation of an MPPC; Hamamatsu Corporation and New Jersey Institute of Technology: Hamamatsu, Japan, 2014. [Google Scholar]
  123. Nabet, B. Photodetectors: Materials, Devices and Applications; Woodhead Publishing: Cambridge, UK, 2016. [Google Scholar]
  124. Yotter, R.A.; Wilson, D.M. A review of photodetectors for sensing light-emitting reporters in biological systems. IEEE Sens. J. 2003, 3, 288–303. [Google Scholar] [CrossRef]
  125. Melchior, H.; Fisher, M.B.; Arams, F.R. Photodetectors for optical communication systems. Proc. IEEE 1970, 58, 1466–1486. [Google Scholar] [CrossRef]
  126. Alexander, S.B. Optical Communication Receiver Design; SPIE Optical Engineering Press: London, UK, 1997. [Google Scholar]
  127. McManamon, P. LiDAR Technologies and Systems; SPIE Press: Bellingham, WA, USA, 2019. [Google Scholar]
  128. Ng, K.K. Avalanche Photodiode (APD). In Complete Guide to Semiconductor Devices; Wiley-IEEE Press: Hoboken, NJ, USA, 2002; pp. 454–461. [Google Scholar]
  129. Zappa, F.; Lotito, A.; Giudice, A.; Cova, S.; Ghioni, M. Monolithic active-quenching and active-reset circuit for single-photon avalanche detectors. IEEE J. Solid State Circ. 2003, 38, 1298–1301. [Google Scholar] [CrossRef]
  130. Cova, S.; Ghioni, M.; Lotito, A.; Rech, I.; Zappa, F. Evolution and prospects for single-photon avalanche diodes and quenching circuits. J. Mod. Opt. 2004, 51, 1267–1288. [Google Scholar] [CrossRef]
  131. Charbon, E.; Fishburn, M.; Walker, R.; Henderson, R.K.; Niclass, C. SPAD-based sensors. In TOF Range-Imaging Cameras; Springer: Berlin, Germany, 2013; pp. 11–38. [Google Scholar]
  132. Yamamoto, K.; Yamamura, K.; Sato, K.; Ota, T.; Suzuki, H.; Ohsuka, S. Development of multi-pixel photon counter (MPPC). In Proceedings of the 2006 IEEE Nuclear Science Symposium Conference Record, San Diego, CA, USA, 29 October–1 November 2006; Volume 2, pp. 1094–1097. [Google Scholar]
  133. Gomi, S.; Hano, H.; Iijima, T.; Itoh, S.; Kawagoe, K.; Kim, S.H.; Kubota, T.; Maeda, T.; Matsumura, T.; Mazuka, Y.; et al. Development and study of the multi pixel photon counter. Nucl. Instrum. Methods Phys. Res. Sec. A Accel. Spectrom. Detect. Assoc. Equip. 2007, 581, 427–432. [Google Scholar] [CrossRef]
  134. Ward, M.; Vacheret, A. Impact of after-pulse, pixel crosstalk and recovery time in multi-pixel photon counter (TM) response. Nucl. Instrum. Methods Phys. Res. Sec. A Accel. Spectrom. Detect. Assoc. Equip. 2009, 610, 370–373. [Google Scholar] [CrossRef]
  135. Riu, J.; Sicard, M.; Royo, S.; Comerón, A. Silicon photomultiplier detector for atmospheric lidar applications. Opt. Lett. 2012, 37, 1229–1231. [Google Scholar] [CrossRef]
  136. Foord, R.; Jones, R.; Oliver, C.J.; Pike, E.R. The Use of Photomultiplier Tubes for Photon Counting. Appl. Opt. 1969, 8, 1975–1989. [Google Scholar] [CrossRef] [PubMed]
  137. Schwarz, B. LIDAR: Mapping the world in 3D. Nat. Photon. 2010, 4, 429. [Google Scholar] [CrossRef]
  138. Gotzig, H.; Geduld, G. Automotive LIDAR. In Handbook of Driver Assistance Systems; Springer: Berlin, Germany, 2015; pp. 405–430. [Google Scholar]
  139. Hecht, J. Lidar for Self-Driving Cars. Opt. Photon. News 2018, 29, 26–33. [Google Scholar]
  140. Rosique, F.; Navarro, P.J.; Fernández, C.; Padilla, A. A systematic review of perception system and simulators for autonomous vehicles research. Sensors 2019, 19, 648. [Google Scholar] [CrossRef] [PubMed]
  141. Heinrich, S. Flash Memory in the emerging age of autonomy. In Proceedings of the Flash Memory Summit, Santa Clara, CA, USA, 7–10 August 2017. [Google Scholar]
  142. Bijelic, M.; Gruber, T.; Ritter, W. A Benchmark for Lidar Sensors in Fog: Is Detection Breaking Down? In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 760–767. [Google Scholar]
  143. Rasshofer, R.H.; Spies, M.; Spies, H. Influences of weather phenomena on automotive laser radar systems. Adv. Radio Sci. 2011, 9, 49–60. [Google Scholar] [CrossRef] [Green Version]
  144. Duthon, P.; Colomb, M.; Bernardin, F. Light Transmission in Fog: The Influence of Wavelength on the Extinction Coefficient. Appl. Sci. 2019, 9, 2843. [Google Scholar] [CrossRef]
  145. Kim, G.; Eom, J.; Choi, J.; Park, Y. Mutual Interference on Mobile Pulsed Scanning LIDAR. IEMEK J. Embed. Syst. Appl. 2017, 12, 43–62. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Pulsed time-of-flight (TOF) measurement principle.
Figure 2. TOF phase-measurement principle used in amplitude-modulated continuous-wave (AMCW) sensors.
Figure 3. Frequency modulation and detection in the frequency-modulated continuous-wave (FMCW) method: main parameters involved.
Figure 4. Triangular frequency modulation with time and the linked amplitude signal change in the time domain.
Figure 5. Triangular modulation frequency signal and beat frequency for a moving target.
Figure 6. Schematics of a typical light detection and ranging (lidar) imaging system based on mechanical scanning.
Figure 7. Schematic diagram of the working principle of an optical phased array (OPA): emitted fields from each antenna interfere to steer a far-field pattern.
Figure 8. Detector-array-based lidar diagram.
Figure 9. Schematic diagram of the fiber laser.
Figure 10. Schematic diagram of a microchip laser.
Figure 11. Schematic diagram of an edge-emitting laser (EEL) diode.
Figure 12. Schematic I-V curves of different photodetectors showing the different behaviour of the gain.
Figure 13. Lidar imaging with 1 mrad spatial resolution in the vertical and horizontal directions [68].
Table 1. Summary of working principles.

                        | Pulsed                                     | AMCW                         | FMCW
Parameter measured      | Intensity of emitted and received pulse    | Phase of modulated amplitude | Relative beat of modulated frequency, and Doppler shift
Measurement             | Direct                                     | Indirect                     | Indirect
Detection               | Incoherent                                 | Incoherent                   | Coherent
Use                     | Indoor/Outdoor                             | Indoor only                  | Indoor/Outdoor
Main advantage          | Simplicity of setup; long ambiguity range  | Commercially established     | Simultaneous speed and range measurement
Main limitation         | Low SNR of returned pulse                  | Short ambiguity distance     | Coherence length; stability in operating conditions (e.g., thermal)
Depth resolution (typ.) | 1 cm                                       | 1 cm                         | 0.1 cm

Note: the maximum attainable range has been avoided, as it requires the definition of several other parameters (instantaneous FOV, reflectivity of the target, eye-safety level, etc.).
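As a quick reference for the principles compared in Table 1, the standard single-point ranging relations (cf. Figures 1–5) can be sketched in a few lines of Python. This is an illustrative summary of textbook formulas, not code from any of the systems reviewed; the parameter values in the usage lines are assumed examples.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def pulsed_range(delta_t: float) -> float:
    """Pulsed TOF: range from the pulse round-trip time, R = c*dt/2."""
    return C * delta_t / 2.0

def amcw_range(phase_shift: float, f_mod: float) -> float:
    """AMCW: range from the phase shift (rad) of the amplitude modulation,
    R = c*phi/(4*pi*f_mod); unambiguous only up to R_amb = c/(2*f_mod)."""
    return C * phase_shift / (4.0 * math.pi * f_mod)

def fmcw_range_velocity(f_up: float, f_down: float, bandwidth: float,
                        t_ramp: float, wavelength: float) -> tuple:
    """Triangular FMCW: the mean of the up/down-ramp beat frequencies carries
    range, their half-difference the Doppler shift (signs depend on convention)."""
    f_range = (f_up + f_down) / 2.0      # f_R = 2*R*B / (c*T_ramp)
    f_doppler = (f_down - f_up) / 2.0    # f_D = 2*v_r / lambda
    rng = C * f_range * t_ramp / (2.0 * bandwidth)
    vel = wavelength * f_doppler / 2.0
    return rng, vel

print(pulsed_range(667e-9))        # ~100 m for a 667 ns round trip
print(amcw_range(math.pi, 10e6))   # 7.5 m, half the 15 m ambiguity range at 10 MHz
```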
Table 2. Summary of imaging strategies.

                  | Mechanical Scanners                 | MEMS Scanners                      | OPAs                     | Flash                     | AMCWs
Working principle | Galvos, rotating mirrors, or prisms | MEMS micromirror                   | Phased array of antennas | Pulsed flood illumination | Pixelated phase meters
Main advantage    | 360° horizontal FOV                 | Compact and lightweight            | Fully solid-state        | Fast frame rate           | Commercially available
Main disadvantage | Moving elements; bulky              | Laser power management, linearity  | Lab-only for long range  | Limited range; blindable  | Indoor only
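To make the OPA column of Table 2 concrete, the sketch below evaluates the standard far-field steering relation of a one-dimensional phased array (the geometry of Figure 7). The pitch, wavelength, and phase step are assumed example values, not parameters of any cited device.

```python
import math

def opa_steering_angle(phase_step: float, pitch: float, wavelength: float) -> float:
    """Main-lobe steering angle (rad) of a 1-D optical phased array with emitter
    pitch d and a linear phase increment dphi between adjacent antennas:
    sin(theta) = wavelength * dphi / (2 * pi * d)."""
    return math.asin(wavelength * phase_step / (2.0 * math.pi * pitch))

# Grating lobes (aliased beams) disappear only for pitches at or below half a
# wavelength, which is what makes aliasing-free steering [71] challenging.
theta = opa_steering_angle(phase_step=math.pi / 4, pitch=775e-9, wavelength=1550e-9)
print(f"{math.degrees(theta):.1f} deg")  # ~14.5 deg at lambda/2 pitch
```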
Table 3. Summary: main features of sources for lidar.

                  | Fibre Laser                                        | Microchip Laser                     | Diode Laser
Amplifying medium | Doped optical fibre                                | Doped solid-state crystal           | Semiconductor PN junction
Peak power (typ.) | >10 kW                                             | >1 kW                               | 0.1 kW
PRR               | <1 MHz                                             | <1 MHz                              | ≈100 kHz
Pulse width       | <5 ns                                              | <5 ns                               | 100 ns
Main advantage    | Pulse peak power, PRR, beam quality; beam delivery | Pulse peak power, PRR, beam quality | Cost; compactness
Main disadvantage | Cost                                               | Cost; beam delivery                 | Maximum output power and PRR; beam quality

Note: these are typical values for reference when used for lidar. Individual components evolve continuously and may differ at the time of publishing or from specific providers. Furthermore, several trade-offs related to laser performance need to be considered (e.g., some DLs may have much larger peak power, but at smaller PRR values).
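The figures of merit in Table 3 are tied together by simple energy arithmetic: pulse energy is roughly peak power times pulse width, and average optical power is pulse energy times PRR. A minimal sketch, using the table's reference values for a fibre laser (the 100 kHz PRR is an assumed operating point within the <1 MHz entry):

```python
def pulse_energy(peak_power: float, pulse_width: float) -> float:
    """Approximate pulse energy (J), treating the pulse as rectangular."""
    return peak_power * pulse_width

def average_power(peak_power: float, pulse_width: float, prr: float) -> float:
    """Average optical power (W) = pulse energy x pulse repetition rate."""
    return pulse_energy(peak_power, pulse_width) * prr

# Order-of-magnitude check for a fibre laser (10 kW peak, 5 ns, 100 kHz assumed):
print(pulse_energy(10e3, 5e-9))          # 5e-05 J -> 50 uJ per pulse
print(average_power(10e3, 5e-9, 100e3))  # 5.0 W average
```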
Table 4. Summary: main features of photodetectors for lidar.

                  | PIN                | APDs                    | SPADs                   | MPPCs                              | PMTs
Solid state       | Yes                | Yes                     | Yes                     | Yes                                | No
Gain (typ.)       | 1                  | Linear (≈200)           | Geiger (10⁴)            | Geiger (10⁶)                       | Avalanche (10⁶)
Main advantage    | Fast               | Adjustable gain by bias | Single-photon detection | Single-photon counting             | Gain, UV detection
Main disadvantage | Limited at low SNR | Limited gain            | Recovery time           | Saturable; bias-voltage dependence | Bulky, low QE, high voltage, magnetic fields

Note: these are typical values for reference. Individual components evolve continuously and may differ at the time of publishing or from specific providers.
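As a back-of-the-envelope companion to Table 4: linear-mode detectors (PIN, APD) produce a photocurrent I = M·R(λ)·P, while Geiger-mode devices (SPADs, MPPCs) respond to individual photon arrivals, so the photon rate is the relevant quantity. The sketch below assumes a hypothetical 1 nW echo at 905 nm and a 0.5 A/W responsivity, chosen only for illustration.

```python
H = 6.62607015e-34   # Planck constant (J*s)
C = 299_792_458.0    # speed of light (m/s)

def linear_photocurrent(p_opt: float, responsivity: float, gain: float = 1.0) -> float:
    """Linear-mode detector output current: I = M * R(lambda) * P_opt."""
    return gain * responsivity * p_opt

def photon_rate(p_opt: float, wavelength: float) -> float:
    """Photon arrival rate (1/s): optical power divided by the photon
    energy h*c/lambda, the relevant view for Geiger-mode counters."""
    return p_opt * wavelength / (H * C)

print(linear_photocurrent(1e-9, 0.5))         # PIN (M = 1):    5e-10 A
print(linear_photocurrent(1e-9, 0.5, 200.0))  # APD (M ~ 200):  1e-07 A
print(f"{photon_rate(1e-9, 905e-9):.2e}")     # ~4.56e+09 photons/s (SPAD/MPPC view)
```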